00:00:00.001 Started by upstream project "autotest-nightly" build number 4135
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3497
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.070 The recommended git tool is: git
00:00:00.070 using credential 00000000-0000-0000-0000-000000000002
00:00:00.072 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.117 Fetching changes from the remote Git repository
00:00:00.120 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.195 Using shallow fetch with depth 1
00:00:00.195 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.195 > git --version # timeout=10
00:00:00.263 > git --version # 'git version 2.39.2'
00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.315 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.315 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.444 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.459 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.471 Checking out Revision 53a1a621557260e3fbfd1fd32ee65ff11a804d5b (FETCH_HEAD)
00:00:07.471 > git config core.sparsecheckout # timeout=10
00:00:07.483 > git read-tree -mu HEAD # timeout=10
00:00:07.500 > git checkout -f 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=5
00:00:07.517 Commit message: "packer: Merge irdmafedora into main fedora image"
00:00:07.518 > git rev-list --no-walk 53a1a621557260e3fbfd1fd32ee65ff11a804d5b # timeout=10
00:00:07.627 [Pipeline] Start of Pipeline
00:00:07.643 [Pipeline] library
00:00:07.644 Loading library shm_lib@master
00:00:07.645 Library shm_lib@master is cached. Copying from home.
00:00:07.662 [Pipeline] node
00:00:07.675 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.677 [Pipeline] {
00:00:07.688 [Pipeline] catchError
00:00:07.689 [Pipeline] {
00:00:07.702 [Pipeline] wrap
00:00:07.711 [Pipeline] {
00:00:07.717 [Pipeline] stage
00:00:07.718 [Pipeline] { (Prologue)
00:00:07.732 [Pipeline] echo
00:00:07.733 Node: VM-host-SM38
00:00:07.738 [Pipeline] cleanWs
00:00:07.747 [WS-CLEANUP] Deleting project workspace...
00:00:07.747 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.753 [WS-CLEANUP] done
00:00:07.930 [Pipeline] setCustomBuildProperty
00:00:08.050 [Pipeline] httpRequest
00:00:08.575 [Pipeline] echo
00:00:08.577 Sorcerer 10.211.164.101 is alive
00:00:08.584 [Pipeline] retry
00:00:08.586 [Pipeline] {
00:00:08.595 [Pipeline] httpRequest
00:00:08.600 HttpMethod: GET
00:00:08.600 URL: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:08.601 Sending request to url: http://10.211.164.101/packages/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:08.602 Response Code: HTTP/1.1 200 OK
00:00:08.602 Success: Status code 200 is in the accepted range: 200,404
00:00:08.603 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:10.058 [Pipeline] }
00:00:10.075 [Pipeline] // retry
00:00:10.084 [Pipeline] sh
00:00:10.371 + tar --no-same-owner -xf jbp_53a1a621557260e3fbfd1fd32ee65ff11a804d5b.tar.gz
00:00:10.385 [Pipeline] httpRequest
00:00:11.008 [Pipeline] echo
00:00:11.009 Sorcerer 10.211.164.101 is alive
00:00:11.020 [Pipeline] retry
00:00:11.022 [Pipeline] {
00:00:11.037 [Pipeline] httpRequest
00:00:11.042 HttpMethod: GET
00:00:11.042 URL: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:11.043 Sending request to url: http://10.211.164.101/packages/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:00:11.061 Response Code: HTTP/1.1 200 OK
00:00:11.062 Success: Status code 200 is in the accepted range: 200,404
00:00:11.062 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:01:21.243 [Pipeline] }
00:01:21.264 [Pipeline] // retry
00:01:21.272 [Pipeline] sh
00:01:21.556 + tar --no-same-owner -xf spdk_09cc66129742c68eb8ce46c42225a27c3c933a14.tar.gz
00:01:24.875 [Pipeline] sh
00:01:25.158 + git -C spdk log --oneline -n5
00:01:25.159 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:01:25.159 a67b3561a dpdk: update submodule to include alarm_cancel fix
00:01:25.159 43f6d3385 nvmf: remove use of STAILQ for last_wqe events
00:01:25.159 9645421c5 nvmf: rename nvmf_rdma_qpair_process_ibv_event()
00:01:25.159 e6da32ee1 nvmf: rename nvmf_rdma_send_qpair_async_event()
00:01:25.179 [Pipeline] writeFile
00:01:25.195 [Pipeline] sh
00:01:25.481 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:25.494 [Pipeline] sh
00:01:25.779 + cat autorun-spdk.conf
00:01:25.779 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:25.779 SPDK_TEST_NVME=1
00:01:25.779 SPDK_TEST_FTL=1
00:01:25.779 SPDK_TEST_ISAL=1
00:01:25.779 SPDK_RUN_ASAN=1
00:01:25.779 SPDK_RUN_UBSAN=1
00:01:25.779 SPDK_TEST_XNVME=1
00:01:25.779 SPDK_TEST_NVME_FDP=1
00:01:25.779 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:25.788 RUN_NIGHTLY=1
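The autorun-spdk.conf printed above is plain shell: each KEY=VALUE line becomes a variable once a runner sources the file, which is exactly what the "++ SPDK_..." xtrace lines further down show. A minimal sketch of that consumption pattern follows; the script name and the echo messages are illustrative, not SPDK's actual runner logic.

    #!/usr/bin/env bash
    # Sketch: consume an autorun-spdk.conf-style flag file (illustrative only).
    set -euo pipefail
    conf=${1:-autorun-spdk.conf}
    [[ -e "$conf" ]] || { echo "missing $conf" >&2; exit 1; }
    source "$conf"   # every KEY=VALUE line becomes a shell variable
    if (( ${SPDK_TEST_NVME:-0} )); then
        echo "NVMe functional tests enabled"
    fi
    if (( ${SPDK_RUN_ASAN:-0} || ${SPDK_RUN_UBSAN:-0} )); then
        echo "sanitizer flags will be passed to configure"
    fi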
00:01:25.790 [Pipeline] }
00:01:25.804 [Pipeline] // stage
00:01:25.819 [Pipeline] stage
00:01:25.821 [Pipeline] { (Run VM)
00:01:25.835 [Pipeline] sh
00:01:26.123 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:26.123 + echo 'Start stage prepare_nvme.sh'
00:01:26.123 Start stage prepare_nvme.sh
00:01:26.123 + [[ -n 4 ]]
00:01:26.123 + disk_prefix=ex4
00:01:26.123 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:26.123 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:26.123 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:26.123 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:26.123 ++ SPDK_TEST_NVME=1
00:01:26.123 ++ SPDK_TEST_FTL=1
00:01:26.123 ++ SPDK_TEST_ISAL=1
00:01:26.123 ++ SPDK_RUN_ASAN=1
00:01:26.123 ++ SPDK_RUN_UBSAN=1
00:01:26.123 ++ SPDK_TEST_XNVME=1
00:01:26.123 ++ SPDK_TEST_NVME_FDP=1
00:01:26.123 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:26.123 ++ RUN_NIGHTLY=1
00:01:26.123 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:26.123 + nvme_files=()
00:01:26.123 + declare -A nvme_files
00:01:26.123 + backend_dir=/var/lib/libvirt/images/backends
00:01:26.123 + nvme_files['nvme.img']=5G
00:01:26.123 + nvme_files['nvme-cmb.img']=5G
00:01:26.123 + nvme_files['nvme-multi0.img']=4G
00:01:26.123 + nvme_files['nvme-multi1.img']=4G
00:01:26.123 + nvme_files['nvme-multi2.img']=4G
00:01:26.123 + nvme_files['nvme-openstack.img']=8G
00:01:26.123 + nvme_files['nvme-zns.img']=5G
00:01:26.123 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:26.123 + (( SPDK_TEST_FTL == 1 ))
00:01:26.123 + nvme_files["nvme-ftl.img"]=6G
00:01:26.123 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:26.123 + nvme_files["nvme-fdp.img"]=1G
00:01:26.123 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:26.123 + for nvme in "${!nvme_files[@]}"
00:01:26.123 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:26.123 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:26.123 + for nvme in "${!nvme_files[@]}"
00:01:26.123 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-ftl.img -s 6G
00:01:26.384 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:26.384 + for nvme in "${!nvme_files[@]}"
00:01:26.384 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:26.384 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:26.384 + for nvme in "${!nvme_files[@]}"
00:01:26.384 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:26.384 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:26.384 + for nvme in "${!nvme_files[@]}"
00:01:26.384 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:26.384 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:26.384 + for nvme in "${!nvme_files[@]}"
00:01:26.384 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:26.644 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:26.644 + for nvme in "${!nvme_files[@]}"
00:01:26.644 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:26.644 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:26.644 + for nvme in "${!nvme_files[@]}"
00:01:26.644 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-fdp.img -s 1G
00:01:26.644 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:26.644 + for nvme in "${!nvme_files[@]}"
00:01:26.644 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:26.644 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:26.644 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:26.644 + echo 'End stage prepare_nvme.sh'
00:01:26.644 End stage prepare_nvme.sh
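Each "Formatting ..." line above is the message qemu-img prints when creating a raw image with preallocation=falloc, which suggests create_nvme_img.sh is a thin wrapper around qemu-img create; treat that as an assumption, since the script body is not shown in this log. A sketch of the same associative-array loop with qemu-img called directly (sizes copied from the log):

    #!/usr/bin/env bash
    # Sketch: recreate the NVMe backing files, assuming qemu-img create is
    # the underlying tool (the "Formatting ..." output format matches it).
    set -euo pipefail
    backend_dir=/var/lib/libvirt/images/backends
    declare -A nvme_files=(
        [nvme.img]=5G [nvme-cmb.img]=5G [nvme-zns.img]=5G [nvme-ftl.img]=6G
        [nvme-multi0.img]=4G [nvme-multi1.img]=4G [nvme-multi2.img]=4G
        [nvme-openstack.img]=8G [nvme-fdp.img]=1G
    )
    mkdir -p "$backend_dir"
    for img in "${!nvme_files[@]}"; do
        qemu-img create -f raw -o preallocation=falloc \
            "$backend_dir/ex4-$img" "${nvme_files[$img]}"
    done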
00:01:26.655 [Pipeline] sh
00:01:26.938 + DISTRO=fedora39
00:01:26.938 + CPUS=10
00:01:26.938 + RAM=12288
00:01:26.938 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:26.938 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex4-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:26.938
00:01:26.938 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:26.938 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:26.938 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:26.938 HELP=0
00:01:26.938 DRY_RUN=0
00:01:26.938 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,
00:01:26.938 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:26.938 NVME_AUTO_CREATE=0
00:01:26.938 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,,
00:01:26.938 NVME_CMB=,,,,
00:01:26.938 NVME_PMR=,,,,
00:01:26.938 NVME_ZNS=,,,,
00:01:26.938 NVME_MS=true,,,,
00:01:26.938 NVME_FDP=,,,on,
00:01:26.938 SPDK_VAGRANT_DISTRO=fedora39
00:01:26.938 SPDK_VAGRANT_VMCPU=10
00:01:26.938 SPDK_VAGRANT_VMRAM=12288
00:01:26.938 SPDK_VAGRANT_PROVIDER=libvirt
00:01:26.938 SPDK_VAGRANT_HTTP_PROXY=
00:01:26.938 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:26.938 SPDK_OPENSTACK_NETWORK=0
00:01:26.938 VAGRANT_PACKAGE_BOX=0
00:01:26.938 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:26.938 FORCE_DISTRO=true
00:01:26.938 VAGRANT_BOX_VERSION=
00:01:26.938 EXTRA_VAGRANTFILES=
00:01:26.938 NIC_MODEL=e1000
00:01:26.938
00:01:26.938 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:26.938 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:29.486 Bringing machine 'default' up with 'libvirt' provider...
00:01:30.057 ==> default: Creating image (snapshot of base box volume).
00:01:30.057 ==> default: Creating domain with the following settings...
00:01:30.057 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1727753302_5723426d5d3a3e1621ef
00:01:30.057 ==> default: -- Domain type: kvm
00:01:30.057 ==> default: -- Cpus: 10
00:01:30.057 ==> default: -- Feature: acpi
00:01:30.057 ==> default: -- Feature: apic
00:01:30.057 ==> default: -- Feature: pae
00:01:30.057 ==> default: -- Memory: 12288M
00:01:30.057 ==> default: -- Memory Backing: hugepages:
00:01:30.057 ==> default: -- Management MAC:
00:01:30.057 ==> default: -- Loader:
00:01:30.057 ==> default: -- Nvram:
00:01:30.057 ==> default: -- Base box: spdk/fedora39
00:01:30.057 ==> default: -- Storage pool: default
00:01:30.057 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1727753302_5723426d5d3a3e1621ef.img (20G)
00:01:30.057 ==> default: -- Volume Cache: default
00:01:30.057 ==> default: -- Kernel:
00:01:30.057 ==> default: -- Initrd:
00:01:30.057 ==> default: -- Graphics Type: vnc
00:01:30.057 ==> default: -- Graphics Port: -1
00:01:30.057 ==> default: -- Graphics IP: 127.0.0.1
00:01:30.057 ==> default: -- Graphics Password: Not defined
00:01:30.057 ==> default: -- Video Type: cirrus
00:01:30.057 ==> default: -- Video VRAM: 9216
00:01:30.057 ==> default: -- Sound Type:
00:01:30.057 ==> default: -- Keymap: en-us
00:01:30.057 ==> default: -- TPM Path:
00:01:30.057 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:30.057 ==> default: -- Command line args:
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:30.057 ==> default: -> value=-drive,
00:01:30.057 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:30.057 ==> default: -> value=-drive,
00:01:30.057 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-1-drive0,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:30.057 ==> default: -> value=-drive,
00:01:30.057 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:30.057 ==> default: -> value=-drive,
00:01:30.057 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:30.057 ==> default: -> value=-drive,
00:01:30.057 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:30.057 ==> default: -> value=-drive,
00:01:30.057 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:30.057 ==> default: -> value=-device,
00:01:30.057 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
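The value= pairs above are the extra QEMU arguments vagrant-libvirt injects, printed one token per line. Reassembled by hand, the FDP-enabled controller (nvme-3) alone corresponds roughly to the invocation below; only the -device/-drive arguments come from the log, while the machine and memory flags are assumed filler.

    # Sketch: the fourth NVMe controller from the log as a flat QEMU command.
    # -device/-drive values are verbatim from the log; the rest is assumed.
    qemu-system-x86_64 \
        -machine q35,accel=kvm -m 1024 -display none \
        -device nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8 \
        -device nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-fdp.img,if=none,id=nvme-3-drive0 \
        -device nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096

Grouping the controller under an nvme-subsys device is what lets the fdp.* parameters (reclaim-unit size and counts) apply at the subsystem level, which is the feature SPDK_TEST_NVME_FDP=1 exercises.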
00:01:30.318 ==> default: Creating shared folders metadata...
00:01:30.318 ==> default: Starting domain.
00:01:32.898 ==> default: Waiting for domain to get an IP address...
00:01:54.872 ==> default: Waiting for SSH to become available...
00:01:54.872 ==> default: Configuring and enabling network interfaces...
00:01:56.255 default: SSH address: 192.168.121.18:22
00:01:56.255 default: SSH username: vagrant
00:01:56.255 default: SSH auth method: private key
00:01:58.798 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:06.949 ==> default: Mounting SSHFS shared folder...
00:02:08.330 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:02:08.330 ==> default: Checking Mount..
00:02:09.267 ==> default: Folder Successfully Mounted!
00:02:09.267
00:02:09.267 SUCCESS!
00:02:09.267
00:02:09.267 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:02:09.267 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:09.267 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:02:09.267
00:02:09.276 [Pipeline] }
00:02:09.294 [Pipeline] // stage
00:02:09.303 [Pipeline] dir
00:02:09.303 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:02:09.304 [Pipeline] {
00:02:09.317 [Pipeline] catchError
00:02:09.319 [Pipeline] {
00:02:09.330 [Pipeline] sh
00:02:09.610 + vagrant ssh-config --host vagrant
00:02:09.610 + sed -ne '/^Host/,$p'
00:02:09.610 + tee ssh_conf
00:02:12.910 Host vagrant
00:02:12.910 HostName 192.168.121.18
00:02:12.910 User vagrant
00:02:12.910 Port 22
00:02:12.910 UserKnownHostsFile /dev/null
00:02:12.910 StrictHostKeyChecking no
00:02:12.910 PasswordAuthentication no
00:02:12.910 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:02:12.910 IdentitiesOnly yes
00:02:12.910 LogLevel FATAL
00:02:12.910 ForwardAgent yes
00:02:12.910 ForwardX11 yes
00:02:12.910
00:02:12.927 [Pipeline] withEnv
00:02:12.929 [Pipeline] {
00:02:12.943 [Pipeline] sh
00:02:13.231 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:02:13.231 source /etc/os-release
00:02:13.231 [[ -e /image.version ]] && img=$(< /image.version)
00:02:13.231 # Minimal, systemd-like check.
00:02:13.231 if [[ -e /.dockerenv ]]; then
00:02:13.231 # Clear garbage from the node'\''s name:
00:02:13.231 # agt-er_autotest_547-896 -> autotest_547-896
00:02:13.231 # $HOSTNAME is the actual container id
00:02:13.231 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:13.231 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:13.231 # We can assume this is a mount from a host where container is running,
00:02:13.231 # so fetch its hostname to easily identify the target swarm worker.
00:02:13.231 container="$(< /etc/hostname) ($agent)"
00:02:13.231 else
00:02:13.231 # Fallback
00:02:13.231 container=$agent
00:02:13.231 fi
00:02:13.231 fi
00:02:13.231 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:13.231 '
00:02:13.505 [Pipeline] }
00:02:13.522 [Pipeline] // withEnv
00:02:13.531 [Pipeline] setCustomBuildProperty
00:02:13.546 [Pipeline] stage
00:02:13.548 [Pipeline] { (Tests)
00:02:13.565 [Pipeline] sh
00:02:13.851 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:14.127 [Pipeline] sh
00:02:14.408 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:14.679 [Pipeline] timeout
00:02:14.680 Timeout set to expire in 50 min
00:02:14.682 [Pipeline] {
00:02:14.697 [Pipeline] sh
00:02:14.974 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:02:15.540 HEAD is now at 09cc66129 test/unit: add mixed busy/idle mock poller function in reactor_ut
00:02:15.552 [Pipeline] sh
00:02:15.828 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:02:16.100 [Pipeline] sh
00:02:16.378 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:16.394 [Pipeline] sh
00:02:16.672 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:02:16.672 ++ readlink -f spdk_repo
00:02:16.930 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:16.930 + [[ -n /home/vagrant/spdk_repo ]]
00:02:16.930 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:16.930 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:16.930 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:16.930 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:16.930 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:16.930 + [[ nvme-vg-autotest == pkgdep-* ]]
00:02:16.930 + cd /home/vagrant/spdk_repo
00:02:16.930 + source /etc/os-release
00:02:16.930 ++ NAME='Fedora Linux'
00:02:16.930 ++ VERSION='39 (Cloud Edition)'
00:02:16.930 ++ ID=fedora
00:02:16.930 ++ VERSION_ID=39
00:02:16.930 ++ VERSION_CODENAME=
00:02:16.930 ++ PLATFORM_ID=platform:f39
00:02:16.930 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:16.930 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:16.930 ++ LOGO=fedora-logo-icon
00:02:16.930 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:16.930 ++ HOME_URL=https://fedoraproject.org/
00:02:16.930 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:16.930 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:16.930 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:16.930 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:16.930 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:16.930 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:16.930 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:16.930 ++ SUPPORT_END=2024-11-12
00:02:16.930 ++ VARIANT='Cloud Edition'
00:02:16.930 ++ VARIANT_ID=cloud
00:02:16.930 + uname -a
00:02:16.930 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:16.930 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:17.189 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:17.448 Hugepages
00:02:17.448 node hugesize free / total
00:02:17.448 node0 1048576kB 0 / 0
00:02:17.448 node0 2048kB 0 / 0
00:02:17.448
00:02:17.448 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:17.448 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:17.448 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:02:17.448 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:17.448 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:17.448 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
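setup.sh status reports both hugepage sizes at 0/0, which is normal before setup.sh has reserved any; the reservation itself goes through standard kernel interfaces. A sketch of doing it by hand follows (the count of 1024 pages, about 2 GiB, is an arbitrary example, and "what setup.sh does" is a simplification of the real script):

    # Sketch: reserve 2 MiB hugepages manually via the standard kernel knobs.
    echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
    grep -i huge /proc/meminfo    # verify HugePages_Total / HugePages_Free
    # NUMA-aware variant, per node:
    echo 1024 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages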
00:02:17.448 + rm -f /tmp/spdk-ld-path
00:02:17.448 + source autorun-spdk.conf
00:02:17.448 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.448 ++ SPDK_TEST_NVME=1
00:02:17.448 ++ SPDK_TEST_FTL=1
00:02:17.448 ++ SPDK_TEST_ISAL=1
00:02:17.448 ++ SPDK_RUN_ASAN=1
00:02:17.448 ++ SPDK_RUN_UBSAN=1
00:02:17.448 ++ SPDK_TEST_XNVME=1
00:02:17.448 ++ SPDK_TEST_NVME_FDP=1
00:02:17.448 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:17.448 ++ RUN_NIGHTLY=1
00:02:17.448 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:17.448 + [[ -n '' ]]
00:02:17.448 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:17.448 + for M in /var/spdk/build-*-manifest.txt
00:02:17.448 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:17.448 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.448 + for M in /var/spdk/build-*-manifest.txt
00:02:17.448 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:17.448 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.448 + for M in /var/spdk/build-*-manifest.txt
00:02:17.448 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:17.448 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:17.448 ++ uname
00:02:17.448 + [[ Linux == \L\i\n\u\x ]]
00:02:17.448 + sudo dmesg -T
00:02:17.448 + sudo dmesg --clear
00:02:17.448 + dmesg_pid=5034
+ [[ Fedora Linux == FreeBSD ]]
00:02:17.448 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.448 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:17.448 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:17.448 + sudo dmesg -Tw
00:02:17.448 + [[ -x /usr/src/fio-static/fio ]]
00:02:17.448 + export FIO_BIN=/usr/src/fio-static/fio
00:02:17.448 + FIO_BIN=/usr/src/fio-static/fio
00:02:17.448 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:17.448 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:17.448 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:17.448 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.448 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:17.448 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:17.448 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.448 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:17.448 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:17.448 Test configuration:
00:02:17.448 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:17.448 SPDK_TEST_NVME=1
00:02:17.448 SPDK_TEST_FTL=1
00:02:17.448 SPDK_TEST_ISAL=1
00:02:17.448 SPDK_RUN_ASAN=1
00:02:17.448 SPDK_RUN_UBSAN=1
00:02:17.448 SPDK_TEST_XNVME=1
00:02:17.448 SPDK_TEST_NVME_FDP=1
00:02:17.448 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:17.707 RUN_NIGHTLY=1
00:02:17.707 03:29:10 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:02:17.707 03:29:10 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:17.707 03:29:10 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:17.707 03:29:10 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:17.707 03:29:10 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:17.707 03:29:10 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:17.707 03:29:10 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.707 03:29:10 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.707 03:29:10 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.707 03:29:10 -- paths/export.sh@5 -- $ export PATH
00:02:17.707 03:29:10 -- paths/export.sh@6 -- $ echo
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:17.707 03:29:10 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:17.707 03:29:10 -- common/autobuild_common.sh@479 -- $ date +%s
00:02:17.707 03:29:10 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727753350.XXXXXX
00:02:17.707 03:29:10 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727753350.4NEzB2
00:02:17.707 03:29:10 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]]
00:02:17.707 03:29:10 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']'
00:02:17.707 03:29:10 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:17.707 03:29:10 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:17.707 03:29:10 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:17.707 03:29:10 -- common/autobuild_common.sh@495 -- $ get_config_params
00:02:17.707 03:29:10 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:17.707 03:29:10 -- common/autotest_common.sh@10 -- $ set +x
00:02:17.707 03:29:10 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:02:17.707 03:29:10 -- common/autobuild_common.sh@497 -- $ start_monitor_resources
00:02:17.707 03:29:10 -- pm/common@17 -- $ local monitor
00:02:17.707 03:29:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.707 03:29:10 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:17.707 03:29:10 -- pm/common@21 -- $ date +%s
00:02:17.707 03:29:10 -- pm/common@25 -- $ sleep 1
00:02:17.707 03:29:10 -- pm/common@21 -- $ date +%s
00:02:17.707 03:29:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727753350
00:02:17.707 03:29:10 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1727753350
00:02:17.707 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727753350_collect-cpu-load.pm.log
00:02:17.707 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1727753350_collect-vmstat.pm.log
00:02:18.681 03:29:11 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT
00:02:18.681 03:29:11 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:18.681 03:29:11 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:18.681 03:29:11 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:18.681 03:29:11 -- spdk/autobuild.sh@16 -- $ date -u
00:02:18.681 Tue Oct 1 03:29:11 AM UTC 2024
00:02:18.681 03:29:11 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:18.681 v25.01-pre-17-g09cc66129
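The two "Redirecting to ...pm.log" lines come from resource monitors (collect-cpu-load, collect-vmstat) that the build starts in the background and later tears down from the EXIT trap visible above (trap stop_monitor_resources EXIT). Reduced to a self-contained sketch, with vmstat and mpstat as stand-ins for SPDK's collector scripts:

    #!/usr/bin/env bash
    # Sketch of the background-monitor pattern above: start collectors,
    # record their PIDs, stop them from an EXIT trap. vmstat/mpstat are
    # stand-ins, not SPDK's actual scripts.
    pids=()
    vmstat 1 >> vmstat.pm.log 2>&1 &
    pids+=($!)
    mpstat 1 >> cpu-load.pm.log 2>&1 &
    pids+=($!)
    stop_monitors() {
        for p in "${pids[@]}"; do
            kill "$p" 2>/dev/null || true
        done
    }
    trap stop_monitors EXIT
    make -j10    # stand-in for the monitored build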
00:02:18.681 03:29:11 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:18.681 03:29:11 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:18.681 03:29:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:18.681 03:29:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:18.681 03:29:11 -- common/autotest_common.sh@10 -- $ set +x
00:02:18.681 ************************************
00:02:18.681 START TEST asan
00:02:18.681 ************************************
00:02:18.681 using asan
00:02:18.681 03:29:11 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:18.681
00:02:18.681 real 0m0.000s
00:02:18.681 user 0m0.000s
00:02:18.681 sys 0m0.000s
00:02:18.681 03:29:11 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:18.681 03:29:11 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:18.681 ************************************
00:02:18.681 END TEST asan
00:02:18.681 ************************************
00:02:18.681 03:29:11 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:18.681 03:29:11 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:18.681 03:29:11 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:18.681 03:29:11 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:18.681 03:29:11 -- common/autotest_common.sh@10 -- $ set +x
00:02:18.681 ************************************
00:02:18.681 START TEST ubsan
00:02:18.681 ************************************
00:02:18.681 using ubsan
00:02:18.681 03:29:11 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:18.681
00:02:18.681 real 0m0.000s
00:02:18.681 user 0m0.000s
00:02:18.681 sys 0m0.000s
00:02:18.681 03:29:11 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:18.681 ************************************
00:02:18.681 END TEST ubsan
00:02:18.681 ************************************
00:02:18.681 03:29:11 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:18.681 03:29:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:18.681 03:29:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:18.681 03:29:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:18.681 03:29:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:18.681 03:29:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:18.681 03:29:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:18.681 03:29:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:18.681 03:29:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:18.681 03:29:11 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:02:18.947 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:18.947 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:19.205 Using 'verbs' RDMA provider
00:02:30.103 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:40.085 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:40.346 Creating mk/config.mk...done.
00:02:40.346 Creating mk/cc.flags.mk...done.
00:02:40.346 Type 'make' to build.
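run_test, which wraps the asan/ubsan checks above and the make step below, prints START/END banners around a named command and times it. A stripped-down lookalike, reconstructed only from what the banners in this log show (SPDK's real helper lives in autotest_common.sh and also manages xtrace), is:

    # Sketch: a minimal run_test-style wrapper, reconstructed from the banners.
    run_test_sketch() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }
    run_test_sketch ubsan echo 'using ubsan'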
00:02:40.346 03:29:32 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:40.346 03:29:32 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:40.346 03:29:32 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:40.346 03:29:32 -- common/autotest_common.sh@10 -- $ set +x
00:02:40.346 ************************************
00:02:40.346 START TEST make
00:02:40.346 ************************************
00:02:40.346 03:29:32 make -- common/autotest_common.sh@1125 -- $ make -j10
00:02:40.606 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:40.606 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:40.606 meson setup builddir \
00:02:40.606 -Dwith-libaio=enabled \
00:02:40.606 -Dwith-liburing=enabled \
00:02:40.606 -Dwith-libvfn=disabled \
00:02:40.606 -Dwith-spdk=false && \
00:02:40.606 meson compile -C builddir && \
00:02:40.606 cd -)
00:02:40.606 make[1]: Nothing to be done for 'all'.
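The parenthesized block above is the entire xnvme build: one meson setup to configure, one meson compile to build. The same two steps can be run by hand from the submodule, and meson compile shells out to ninja, as the "INFO: calculating backend command" lines further down confirm. A sketch, including the pkg-config probe meson performs for liburing:

    # Sketch: drive the xnvme submodule build manually, mirroring the block above.
    cd /home/vagrant/spdk_repo/spdk/xnvme
    export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig
    pkg-config --modversion liburing   # what dependency('liburing') resolves; 2.2 in this log
    meson setup builddir -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=false
    ninja -C builddir                  # equivalent to: meson compile -C builddir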
00:02:42.507 The Meson build system
00:02:42.507 Version: 1.5.0
00:02:42.507 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:42.507 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:42.507 Build type: native build
00:02:42.507 Project name: xnvme
00:02:42.507 Project version: 0.7.3
00:02:42.507 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:42.507 C linker for the host machine: cc ld.bfd 2.40-14
00:02:42.507 Host machine cpu family: x86_64
00:02:42.507 Host machine cpu: x86_64
00:02:42.507 Message: host_machine.system: linux
00:02:42.507 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:42.507 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:42.507 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:42.507 Run-time dependency threads found: YES
00:02:42.507 Has header "setupapi.h" : NO
00:02:42.507 Has header "linux/blkzoned.h" : YES
00:02:42.507 Has header "linux/blkzoned.h" : YES (cached)
00:02:42.507 Has header "libaio.h" : YES
00:02:42.507 Library aio found: YES
00:02:42.507 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:42.507 Run-time dependency liburing found: YES 2.2
00:02:42.507 Dependency libvfn skipped: feature with-libvfn disabled
00:02:42.507 Run-time dependency appleframeworks found: NO (tried framework)
00:02:42.507 Run-time dependency appleframeworks found: NO (tried framework)
00:02:42.507 Configuring xnvme_config.h using configuration
00:02:42.507 Configuring xnvme.spec using configuration
00:02:42.507 Run-time dependency bash-completion found: YES 2.11
00:02:42.507 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:42.507 Program cp found: YES (/usr/bin/cp)
00:02:42.507 Has header "winsock2.h" : NO
00:02:42.507 Has header "dbghelp.h" : NO
00:02:42.507 Library rpcrt4 found: NO
00:02:42.507 Library rt found: YES
00:02:42.507 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:42.508 Found CMake: /usr/bin/cmake (3.27.7)
00:02:42.508 Run-time dependency _spdk found: NO (tried pkgconfig and cmake)
00:02:42.508 Run-time dependency wpdk found: NO (tried pkgconfig and cmake)
00:02:42.508 Run-time dependency spdk-win found: NO (tried pkgconfig and cmake)
00:02:42.508 Build targets in project: 32
00:02:42.508
00:02:42.508 xnvme 0.7.3
00:02:42.508
00:02:42.508 User defined options
00:02:42.508 with-libaio : enabled
00:02:42.508 with-liburing: enabled
00:02:42.508 with-libvfn : disabled
00:02:42.508 with-spdk : false
00:02:42.508
00:02:42.508 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:43.074 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:43.074 [1/203] Generating toolbox/xnvme-driver-script with a custom command
00:02:43.074 [2/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd.c.o
00:02:43.074 [3/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_async.c.o
00:02:43.074 [4/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_admin_shim.c.o
00:02:43.074 [5/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_dev.c.o
00:02:43.074 [6/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_mem_posix.c.o
00:02:43.074 [7/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_nil.c.o
00:02:43.074 [8/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_emu.c.o
00:02:43.074 [9/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_sync_psync.c.o
00:02:43.074 [10/203] Compiling C object lib/libxnvme.so.p/xnvme_adm.c.o
00:02:43.074 [11/203] Compiling C object lib/libxnvme.so.p/xnvme_be_fbsd_nvme.c.o
00:02:43.074 [12/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_posix.c.o
00:02:43.333 [13/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux.c.o
00:02:43.333 [14/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos.c.o
00:02:43.333 [15/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_admin.c.o
00:02:43.333 [16/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_ucmd.c.o
00:02:43.333 [17/203] Compiling C object lib/libxnvme.so.p/xnvme_be_cbi_async_thrpool.c.o
00:02:43.333 [18/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_dev.c.o
00:02:43.333 [19/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_libaio.c.o
00:02:43.333 [20/203] Compiling C object lib/libxnvme.so.p/xnvme_be_macos_sync.c.o
00:02:43.333 [21/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_hugepage.c.o
00:02:43.333 [22/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_admin.c.o
00:02:43.333 [23/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_dev.c.o
00:02:43.333 [24/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk.c.o
00:02:43.333 [25/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_nvme.c.o
00:02:43.333 [26/203] Compiling C object lib/libxnvme.so.p/xnvme_be_nosys.c.o
00:02:43.333 [27/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_admin.c.o
00:02:43.333 [28/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk.c.o
00:02:43.333 [29/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_async.c.o
00:02:43.333 [30/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_dev.c.o
00:02:43.333 [31/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_dev.c.o
00:02:43.333 [32/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_async_liburing.c.o
00:02:43.333 [33/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_mem.c.o
00:02:43.333 [34/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_async.c.o
00:02:43.333 [35/203] Compiling C object lib/libxnvme.so.p/xnvme_be_spdk_sync.c.o
00:02:43.333 [36/203] Compiling C object lib/libxnvme.so.p/xnvme_be_linux_block.c.o
00:02:43.333 [37/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_admin.c.o
00:02:43.333 [38/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio.c.o
00:02:43.333 [39/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_dev.c.o
00:02:43.333 [40/203] Compiling C object lib/libxnvme.so.p/xnvme_be_ramdisk_sync.c.o
00:02:43.333 [41/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_mem.c.o
00:02:43.333 [42/203] Compiling C object lib/libxnvme.so.p/xnvme_be.c.o
00:02:43.333 [43/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows.c.o
00:02:43.591 [44/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_fs.c.o
00:02:43.591 [45/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp_th.c.o
00:02:43.591 [46/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_iocp.c.o
00:02:43.591 [47/203] Compiling C object lib/libxnvme.so.p/xnvme_be_vfio_sync.c.o
00:02:43.591 [48/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_async_ioring.c.o
00:02:43.591 [49/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_block.c.o
00:02:43.591 [50/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_dev.c.o
00:02:43.591 [51/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_mem.c.o
00:02:43.591 [52/203] Compiling C object lib/libxnvme.so.p/xnvme_be_windows_nvme.c.o
00:02:43.591 [53/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf_entries.c.o
00:02:43.591 [54/203] Compiling C object lib/libxnvme.so.p/xnvme_file.c.o
00:02:43.591 [55/203] Compiling C object lib/libxnvme.so.p/xnvme_geo.c.o
00:02:43.591 [56/203] Compiling C object lib/libxnvme.so.p/xnvme_cmd.c.o
00:02:43.591 [57/203] Compiling C object lib/libxnvme.so.p/xnvme_ident.c.o
00:02:43.591 [58/203] Compiling C object lib/libxnvme.so.p/xnvme_req.c.o
00:02:43.591 [59/203] Compiling C object lib/libxnvme.so.p/xnvme_dev.c.o
00:02:43.591 [60/203] Compiling C object lib/libxnvme.so.p/xnvme_libconf.c.o
00:02:43.591 [61/203] Compiling C object lib/libxnvme.so.p/xnvme_lba.c.o
00:02:43.591 [62/203] Compiling C object lib/libxnvme.so.p/xnvme_nvm.c.o
00:02:43.591 [63/203] Compiling C object lib/libxnvme.so.p/xnvme_opts.c.o
00:02:43.591 [64/203] Compiling C object lib/libxnvme.so.p/xnvme_queue.c.o
00:02:43.591 [65/203] Compiling C object lib/libxnvme.so.p/xnvme_buf.c.o
00:02:43.591 [66/203] Compiling C object lib/libxnvme.so.p/xnvme_ver.c.o
00:02:43.591 [67/203] Compiling C object lib/libxnvme.so.p/xnvme_kvs.c.o
00:02:43.591 [68/203] Compiling C object lib/libxnvme.so.p/xnvme_topology.c.o
00:02:43.849 [69/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_admin_shim.c.o
00:02:43.849 [70/203] Compiling C object lib/libxnvme.so.p/xnvme_spec_pp.c.o
00:02:43.849 [71/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_nil.c.o
00:02:43.849 [72/203] Compiling C object lib/libxnvme.a.p/xnvme_adm.c.o
00:02:43.849 [73/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_emu.c.o
00:02:43.849 [74/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_mem_posix.c.o
00:02:43.849 [75/203] Compiling C object lib/libxnvme.so.p/xnvme_cli.c.o
00:02:43.849 [76/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd.c.o
00:02:43.849 [77/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_posix.c.o
00:02:43.849 [78/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_async.c.o
00:02:43.849 [79/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_dev.c.o
00:02:43.849 [80/203] Compiling C object lib/libxnvme.a.p/xnvme_be_fbsd_nvme.c.o
00:02:43.849 [81/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_sync_psync.c.o
00:02:43.849 [82/203] Compiling C object lib/libxnvme.so.p/xnvme_znd.c.o
00:02:43.849 [83/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux.c.o
00:02:43.849 [84/203] Compiling C object lib/libxnvme.a.p/xnvme_be_cbi_async_thrpool.c.o
00:02:43.849 [85/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos.c.o
00:02:44.107 [86/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_admin.c.o
00:02:44.107 [87/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_libaio.c.o
00:02:44.107 [88/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_sync.c.o
00:02:44.107 [89/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_hugepage.c.o
00:02:44.107 [90/203] Compiling C object lib/libxnvme.a.p/xnvme_be_macos_dev.c.o
00:02:44.107 [91/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_nvme.c.o
00:02:44.107 [92/203] Compiling C object lib/libxnvme.a.p/xnvme_be.c.o
00:02:44.107 [93/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_dev.c.o
00:02:44.107 [94/203] Compiling C object lib/libxnvme.a.p/xnvme_be_nosys.c.o
00:02:44.107 [95/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_ucmd.c.o
00:02:44.107 [96/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk.c.o
00:02:44.107 [97/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk.c.o
00:02:44.107 [98/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_async.c.o
00:02:44.107 [99/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_admin.c.o
00:02:44.107 [100/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_async_liburing.c.o
00:02:44.107 [101/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_dev.c.o
00:02:44.107 [102/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_mem.c.o
00:02:44.107 [103/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_admin.c.o
00:02:44.107 [104/203] Compiling C object lib/libxnvme.a.p/xnvme_be_linux_block.c.o
00:02:44.107 [105/203] Compiling C object lib/libxnvme.a.p/xnvme_be_spdk_sync.c.o
00:02:44.107 [106/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_dev.c.o
00:02:44.107 [107/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_admin.c.o
00:02:44.107 [108/203] Compiling C object lib/libxnvme.a.p/xnvme_be_ramdisk_sync.c.o
00:02:44.107 [109/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio.c.o
00:02:44.108 [110/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows.c.o
00:02:44.108 [111/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp.c.o
00:02:44.108 [112/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_async.c.o
00:02:44.108 [113/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_sync.c.o
00:02:44.108 [114/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_mem.c.o
00:02:44.108 [115/203] Compiling C object lib/libxnvme.a.p/xnvme_be_vfio_dev.c.o
00:02:44.108 [116/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_ioring.c.o
00:02:44.108 [117/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_async_iocp_th.c.o
00:02:44.108 [118/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_fs.c.o
00:02:44.108 [119/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_block.c.o
00:02:44.108 [120/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_dev.c.o
00:02:44.366 [121/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_mem.c.o
00:02:44.366 [122/203] Compiling C object lib/libxnvme.a.p/xnvme_be_windows_nvme.c.o
00:02:44.366 [123/203] Compiling C object lib/libxnvme.so.p/xnvme_spec.c.o
00:02:44.366 [124/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf_entries.c.o
00:02:44.366 [125/203] Compiling C object lib/libxnvme.a.p/xnvme_ident.c.o
00:02:44.366 [126/203] Compiling C object lib/libxnvme.a.p/xnvme_file.c.o
00:02:44.366 [127/203] Compiling C object lib/libxnvme.a.p/xnvme_geo.c.o
00:02:44.366 [128/203] Compiling C object lib/libxnvme.a.p/xnvme_cmd.c.o
00:02:44.366 [129/203] Compiling C object lib/libxnvme.a.p/xnvme_libconf.c.o
00:02:44.366 [130/203] Compiling C object lib/libxnvme.a.p/xnvme_dev.c.o
00:02:44.366 [131/203] Linking target lib/libxnvme.so
00:02:44.366 [132/203] Compiling C object lib/libxnvme.a.p/xnvme_req.c.o
00:02:44.366 [133/203] Compiling C object lib/libxnvme.a.p/xnvme_lba.c.o
00:02:44.366 [134/203] Compiling C object lib/libxnvme.a.p/xnvme_kvs.c.o
00:02:44.366 [135/203] Compiling C object lib/libxnvme.a.p/xnvme_nvm.c.o
00:02:44.366 [136/203] Compiling C object lib/libxnvme.a.p/xnvme_buf.c.o
00:02:44.366 [137/203] Compiling C object lib/libxnvme.a.p/xnvme_queue.c.o
00:02:44.366 [138/203] Compiling C object lib/libxnvme.a.p/xnvme_ver.c.o
00:02:44.366 [139/203] Compiling C object lib/libxnvme.a.p/xnvme_opts.c.o
00:02:44.366 [140/203] Compiling C object tests/xnvme_tests_async_intf.p/async_intf.c.o
00:02:44.366 [141/203] Compiling C object tests/xnvme_tests_cli.p/cli.c.o
00:02:44.366 [142/203] Compiling C object lib/libxnvme.a.p/xnvme_spec_pp.c.o
00:02:44.366 [143/203] Compiling C object lib/libxnvme.a.p/xnvme_topology.c.o
00:02:44.624 [144/203] Compiling C object tests/xnvme_tests_buf.p/buf.c.o
00:02:44.624 [145/203] Compiling C object lib/libxnvme.a.p/xnvme_znd.c.o
00:02:44.624 [146/203] Compiling C object tests/xnvme_tests_znd_state.p/znd_state.c.o
00:02:44.624 [147/203] Compiling C object tests/xnvme_tests_xnvme_cli.p/xnvme_cli.c.o
00:02:44.624 [148/203] Compiling C object tests/xnvme_tests_enum.p/enum.c.o
00:02:44.624 [149/203] Compiling C object tests/xnvme_tests_xnvme_file.p/xnvme_file.c.o
00:02:44.624 [150/203] Compiling C object tests/xnvme_tests_scc.p/scc.c.o
00:02:44.624 [151/203] Compiling C object tests/xnvme_tests_znd_explicit_open.p/znd_explicit_open.c.o
00:02:44.624 [152/203] Compiling C object tests/xnvme_tests_znd_append.p/znd_append.c.o
00:02:44.624 [153/203] Compiling C object tests/xnvme_tests_lblk.p/lblk.c.o
00:02:44.624 [154/203] Compiling C object tests/xnvme_tests_kvs.p/kvs.c.o
00:02:44.624 [155/203] Compiling C object tests/xnvme_tests_map.p/map.c.o
00:02:44.882 [156/203] Compiling C object examples/xnvme_enum.p/xnvme_enum.c.o
00:02:44.882 [157/203] Compiling C object examples/xnvme_dev.p/xnvme_dev.c.o
00:02:44.882 [158/203] Compiling C object lib/libxnvme.a.p/xnvme_cli.c.o
00:02:44.882 [159/203] Compiling C object tools/lblk.p/lblk.c.o
00:02:44.882 [160/203] Compiling C object tests/xnvme_tests_znd_zrwa.p/znd_zrwa.c.o
00:02:44.882 [161/203] Compiling C object tests/xnvme_tests_ioworker.p/ioworker.c.o
00:02:44.882 [162/203] Compiling C object tools/kvs.p/kvs.c.o
00:02:44.882 [163/203] Compiling C object examples/xnvme_hello.p/xnvme_hello.c.o
00:02:44.882 [164/203] Compiling C object tools/zoned.p/zoned.c.o
00:02:44.882 [165/203] Compiling C object examples/xnvme_single_sync.p/xnvme_single_sync.c.o
00:02:44.882 [166/203] Compiling C object tools/xdd.p/xdd.c.o
00:02:44.882 [167/203] Compiling C object examples/xnvme_io_async.p/xnvme_io_async.c.o
00:02:44.882 [168/203] Compiling C object examples/xnvme_single_async.p/xnvme_single_async.c.o
00:02:44.882 [169/203] Compiling C object tools/xnvme_file.p/xnvme_file.c.o
00:02:44.882 [170/203] Compiling C object examples/zoned_io_sync.p/zoned_io_sync.c.o
00:02:44.882 [171/203] Compiling C object examples/zoned_io_async.p/zoned_io_async.c.o
00:02:45.140 [172/203] Compiling C object tools/xnvme.p/xnvme.c.o
00:02:45.140 [173/203] Compiling C object lib/libxnvme.a.p/xnvme_spec.c.o
00:02:45.141 [174/203] Linking static target lib/libxnvme.a
00:02:45.141 [175/203] Linking target tests/xnvme_tests_async_intf
00:02:45.141 [176/203] Linking target tests/xnvme_tests_enum
00:02:45.141 [177/203] Linking target tests/xnvme_tests_buf
00:02:45.141 [178/203] Linking target tests/xnvme_tests_cli
00:02:45.141 [179/203] Linking target tests/xnvme_tests_scc
00:02:45.141 [180/203] Linking target tests/xnvme_tests_xnvme_cli
00:02:45.141 [181/203] Linking target tests/xnvme_tests_lblk
00:02:45.141 [182/203] Linking target tests/xnvme_tests_ioworker
00:02:45.141 [183/203] Linking target tests/xnvme_tests_znd_append
00:02:45.141 [184/203] Linking target tests/xnvme_tests_xnvme_file
00:02:45.141 [185/203] Linking target tests/xnvme_tests_znd_explicit_open
00:02:45.141 [186/203] Linking target tests/xnvme_tests_kvs
00:02:45.141 [187/203] Linking target tools/xdd
00:02:45.141 [188/203] Linking target tests/xnvme_tests_znd_state
00:02:45.141 [189/203] Linking target tests/xnvme_tests_znd_zrwa
00:02:45.141 [190/203] Linking target tests/xnvme_tests_map
00:02:45.398 [191/203] Linking target tools/lblk
00:02:45.398 [192/203] Linking target tools/kvs
00:02:45.398 [193/203] Linking target tools/xnvme
00:02:45.398 [194/203] Linking target examples/xnvme_dev
00:02:45.398 [195/203] Linking target tools/xnvme_file
00:02:45.398 [196/203] Linking target tools/zoned
00:02:45.398 [197/203] Linking target examples/xnvme_io_async
00:02:45.398 [198/203] Linking target examples/zoned_io_async
00:02:45.398 [199/203] Linking target examples/xnvme_single_async
00:02:45.398 [200/203] Linking target examples/xnvme_enum
00:02:45.398 [201/203] Linking target examples/xnvme_single_sync
00:02:45.398 [202/203] Linking target examples/zoned_io_sync
00:02:45.398 [203/203] Linking target examples/xnvme_hello
00:02:45.398 INFO: autodetecting backend as ninja
00:02:45.398 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:45.398 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:51.953 The Meson build system
00:02:51.953 Version: 1.5.0
00:02:51.953 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:51.953 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:51.953 Build type: native build
00:02:51.953 Program cat found: YES (/usr/bin/cat)
00:02:51.953 Project name: DPDK
00:02:51.953 Project version: 24.03.0
00:02:51.953 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:51.953 C linker for the host machine: cc ld.bfd 2.40-14
00:02:51.953 Host machine cpu family: x86_64
00:02:51.953 Host machine cpu: x86_64
00:02:51.953 Message: ## Building in Developer Mode ##
00:02:51.953 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:51.953 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:51.953 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:51.953 Program python3 found: YES (/usr/bin/python3)
00:02:51.953 Program cat found: YES (/usr/bin/cat)
00:02:51.953 Compiler for C supports arguments -march=native: YES
00:02:51.953 Checking for size of "void *" : 8
00:02:51.953 Checking for size of "void *" : 8 (cached)
00:02:51.953 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:51.953 Library m found: YES
00:02:51.953 Library numa found: YES
00:02:51.953 Has header "numaif.h" : YES
00:02:51.953 Library fdt found: NO
00:02:51.953 Library execinfo found: NO
00:02:51.953 Has header "execinfo.h" : YES
00:02:51.953 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:51.953 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:51.953 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:51.953 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:51.953 Run-time dependency openssl found: YES 3.1.1
00:02:51.953 Run-time dependency libpcap found: YES 1.10.4
00:02:51.953 Has header "pcap.h" with dependency libpcap: YES
00:02:51.953 Compiler for C supports arguments -Wcast-qual: YES
00:02:51.953 Compiler for C supports arguments -Wdeprecated: YES
00:02:51.953 Compiler for C supports arguments -Wformat: YES
00:02:51.953 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:51.953 Compiler for C supports arguments -Wformat-security: NO
00:02:51.953 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:51.953 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:51.953 Compiler for C supports arguments -Wnested-externs: YES
00:02:51.953 Compiler for C supports arguments -Wold-style-definition: YES
00:02:51.953 Compiler for C supports arguments -Wpointer-arith: YES
00:02:51.953 Compiler for C supports arguments -Wsign-compare: YES
00:02:51.953 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:51.953 Compiler for C supports arguments -Wundef: YES
00:02:51.953 Compiler for C supports arguments -Wwrite-strings: YES
00:02:51.953 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:51.953 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:51.953 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:51.953 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:51.953 Program objdump found: YES (/usr/bin/objdump)
00:02:51.953 Compiler for C supports arguments -mavx512f: YES
00:02:51.953 Checking if "AVX512 checking" compiles: YES
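Every 'Compiler for C supports arguments ...: YES' line above is meson compiling a tiny test program with the flag appended and checking the exit status; 'Checking if "AVX512 checking" compiles' does the same with a real intrinsics snippet. The probe is easy to reproduce from the shell (an empty main is used here as the usual test program):

    # Sketch: reproduce meson's compiler-argument probe by hand.
    echo 'int main(void){ return 0; }' > /tmp/probe.c
    if cc -mavx512f -Werror -c /tmp/probe.c -o /dev/null 2>/dev/null; then
        echo 'Compiler for C supports arguments -mavx512f: YES'
    else
        echo 'Compiler for C supports arguments -mavx512f: NO'
    fi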
"__PCLMUL__" : 1 (cached) 00:02:51.953 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:51.953 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:51.953 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:51.953 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:51.954 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:02:51.954 Compiler for C supports arguments -mpclmul: YES 00:02:51.954 Compiler for C supports arguments -maes: YES 00:02:51.954 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:51.954 Compiler for C supports arguments -mavx512bw: YES 00:02:51.954 Compiler for C supports arguments -mavx512dq: YES 00:02:51.954 Compiler for C supports arguments -mavx512vl: YES 00:02:51.954 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:51.954 Compiler for C supports arguments -mavx2: YES 00:02:51.954 Compiler for C supports arguments -mavx: YES 00:02:51.954 Message: lib/net: Defining dependency "net" 00:02:51.954 Message: lib/meter: Defining dependency "meter" 00:02:51.954 Message: lib/ethdev: Defining dependency "ethdev" 00:02:51.954 Message: lib/pci: Defining dependency "pci" 00:02:51.954 Message: lib/cmdline: Defining dependency "cmdline" 00:02:51.954 Message: lib/hash: Defining dependency "hash" 00:02:51.954 Message: lib/timer: Defining dependency "timer" 00:02:51.954 Message: lib/compressdev: Defining dependency "compressdev" 00:02:51.954 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:51.954 Message: lib/dmadev: Defining dependency "dmadev" 00:02:51.954 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:51.954 Message: lib/power: Defining dependency "power" 00:02:51.954 Message: lib/reorder: Defining dependency "reorder" 00:02:51.954 Message: lib/security: Defining dependency "security" 00:02:51.954 Has header "linux/userfaultfd.h" : YES 00:02:51.954 Has header "linux/vduse.h" : YES 00:02:51.954 Message: lib/vhost: Defining dependency "vhost" 00:02:51.954 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:51.954 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:51.954 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:51.954 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:51.954 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:51.954 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:51.954 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:51.954 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:51.954 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:51.954 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:51.954 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:51.954 Configuring doxy-api-html.conf using configuration 00:02:51.954 Configuring doxy-api-man.conf using configuration 00:02:51.954 Program mandb found: YES (/usr/bin/mandb) 00:02:51.954 Program sphinx-build found: NO 00:02:51.954 Configuring rte_build_config.h using configuration 00:02:51.954 Message: 00:02:51.954 ================= 00:02:51.954 Applications Enabled 00:02:51.954 ================= 00:02:51.954 00:02:51.954 apps: 00:02:51.954 00:02:51.954 00:02:51.954 Message: 00:02:51.954 ================= 00:02:51.954 Libraries Enabled 00:02:51.954 ================= 00:02:51.954 00:02:51.954 libs: 00:02:51.954 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:51.954 net, meter, 
ethdev, pci, cmdline, hash, timer, compressdev, 00:02:51.954 cryptodev, dmadev, power, reorder, security, vhost, 00:02:51.954 00:02:51.954 Message: 00:02:51.954 =============== 00:02:51.954 Drivers Enabled 00:02:51.954 =============== 00:02:51.954 00:02:51.954 common: 00:02:51.954 00:02:51.954 bus: 00:02:51.954 pci, vdev, 00:02:51.954 mempool: 00:02:51.954 ring, 00:02:51.954 dma: 00:02:51.954 00:02:51.954 net: 00:02:51.954 00:02:51.954 crypto: 00:02:51.954 00:02:51.954 compress: 00:02:51.954 00:02:51.954 vdpa: 00:02:51.954 00:02:51.954 00:02:51.954 Message: 00:02:51.954 ================= 00:02:51.954 Content Skipped 00:02:51.954 ================= 00:02:51.954 00:02:51.954 apps: 00:02:51.954 dumpcap: explicitly disabled via build config 00:02:51.954 graph: explicitly disabled via build config 00:02:51.954 pdump: explicitly disabled via build config 00:02:51.954 proc-info: explicitly disabled via build config 00:02:51.954 test-acl: explicitly disabled via build config 00:02:51.954 test-bbdev: explicitly disabled via build config 00:02:51.954 test-cmdline: explicitly disabled via build config 00:02:51.954 test-compress-perf: explicitly disabled via build config 00:02:51.954 test-crypto-perf: explicitly disabled via build config 00:02:51.954 test-dma-perf: explicitly disabled via build config 00:02:51.954 test-eventdev: explicitly disabled via build config 00:02:51.954 test-fib: explicitly disabled via build config 00:02:51.954 test-flow-perf: explicitly disabled via build config 00:02:51.954 test-gpudev: explicitly disabled via build config 00:02:51.954 test-mldev: explicitly disabled via build config 00:02:51.954 test-pipeline: explicitly disabled via build config 00:02:51.954 test-pmd: explicitly disabled via build config 00:02:51.954 test-regex: explicitly disabled via build config 00:02:51.954 test-sad: explicitly disabled via build config 00:02:51.954 test-security-perf: explicitly disabled via build config 00:02:51.954 00:02:51.954 libs: 00:02:51.954 argparse: explicitly disabled via build config 00:02:51.954 metrics: explicitly disabled via build config 00:02:51.954 acl: explicitly disabled via build config 00:02:51.954 bbdev: explicitly disabled via build config 00:02:51.954 bitratestats: explicitly disabled via build config 00:02:51.954 bpf: explicitly disabled via build config 00:02:51.954 cfgfile: explicitly disabled via build config 00:02:51.954 distributor: explicitly disabled via build config 00:02:51.954 efd: explicitly disabled via build config 00:02:51.954 eventdev: explicitly disabled via build config 00:02:51.954 dispatcher: explicitly disabled via build config 00:02:51.954 gpudev: explicitly disabled via build config 00:02:51.954 gro: explicitly disabled via build config 00:02:51.955 gso: explicitly disabled via build config 00:02:51.955 ip_frag: explicitly disabled via build config 00:02:51.955 jobstats: explicitly disabled via build config 00:02:51.955 latencystats: explicitly disabled via build config 00:02:51.955 lpm: explicitly disabled via build config 00:02:51.955 member: explicitly disabled via build config 00:02:51.955 pcapng: explicitly disabled via build config 00:02:51.955 rawdev: explicitly disabled via build config 00:02:51.955 regexdev: explicitly disabled via build config 00:02:51.955 mldev: explicitly disabled via build config 00:02:51.955 rib: explicitly disabled via build config 00:02:51.955 sched: explicitly disabled via build config 00:02:51.955 stack: explicitly disabled via build config 00:02:51.955 ipsec: explicitly disabled via build config 
00:02:51.955 pdcp: explicitly disabled via build config 00:02:51.955 fib: explicitly disabled via build config 00:02:51.955 port: explicitly disabled via build config 00:02:51.955 pdump: explicitly disabled via build config 00:02:51.955 table: explicitly disabled via build config 00:02:51.955 pipeline: explicitly disabled via build config 00:02:51.955 graph: explicitly disabled via build config 00:02:51.955 node: explicitly disabled via build config 00:02:51.955 00:02:51.955 drivers: 00:02:51.955 common/cpt: not in enabled drivers build config 00:02:51.955 common/dpaax: not in enabled drivers build config 00:02:51.955 common/iavf: not in enabled drivers build config 00:02:51.955 common/idpf: not in enabled drivers build config 00:02:51.955 common/ionic: not in enabled drivers build config 00:02:51.955 common/mvep: not in enabled drivers build config 00:02:51.955 common/octeontx: not in enabled drivers build config 00:02:51.955 bus/auxiliary: not in enabled drivers build config 00:02:51.955 bus/cdx: not in enabled drivers build config 00:02:51.955 bus/dpaa: not in enabled drivers build config 00:02:51.955 bus/fslmc: not in enabled drivers build config 00:02:51.955 bus/ifpga: not in enabled drivers build config 00:02:51.955 bus/platform: not in enabled drivers build config 00:02:51.955 bus/uacce: not in enabled drivers build config 00:02:51.955 bus/vmbus: not in enabled drivers build config 00:02:51.955 common/cnxk: not in enabled drivers build config 00:02:51.955 common/mlx5: not in enabled drivers build config 00:02:51.955 common/nfp: not in enabled drivers build config 00:02:51.955 common/nitrox: not in enabled drivers build config 00:02:51.955 common/qat: not in enabled drivers build config 00:02:51.955 common/sfc_efx: not in enabled drivers build config 00:02:51.955 mempool/bucket: not in enabled drivers build config 00:02:51.955 mempool/cnxk: not in enabled drivers build config 00:02:51.955 mempool/dpaa: not in enabled drivers build config 00:02:51.955 mempool/dpaa2: not in enabled drivers build config 00:02:51.955 mempool/octeontx: not in enabled drivers build config 00:02:51.955 mempool/stack: not in enabled drivers build config 00:02:51.955 dma/cnxk: not in enabled drivers build config 00:02:51.955 dma/dpaa: not in enabled drivers build config 00:02:51.955 dma/dpaa2: not in enabled drivers build config 00:02:51.955 dma/hisilicon: not in enabled drivers build config 00:02:51.955 dma/idxd: not in enabled drivers build config 00:02:51.955 dma/ioat: not in enabled drivers build config 00:02:51.955 dma/skeleton: not in enabled drivers build config 00:02:51.955 net/af_packet: not in enabled drivers build config 00:02:51.955 net/af_xdp: not in enabled drivers build config 00:02:51.955 net/ark: not in enabled drivers build config 00:02:51.955 net/atlantic: not in enabled drivers build config 00:02:51.955 net/avp: not in enabled drivers build config 00:02:51.955 net/axgbe: not in enabled drivers build config 00:02:51.955 net/bnx2x: not in enabled drivers build config 00:02:51.955 net/bnxt: not in enabled drivers build config 00:02:51.955 net/bonding: not in enabled drivers build config 00:02:51.955 net/cnxk: not in enabled drivers build config 00:02:51.955 net/cpfl: not in enabled drivers build config 00:02:51.955 net/cxgbe: not in enabled drivers build config 00:02:51.955 net/dpaa: not in enabled drivers build config 00:02:51.955 net/dpaa2: not in enabled drivers build config 00:02:51.955 net/e1000: not in enabled drivers build config 00:02:51.955 net/ena: not in enabled drivers build 
config 00:02:51.955 net/enetc: not in enabled drivers build config 00:02:51.955 net/enetfec: not in enabled drivers build config 00:02:51.955 net/enic: not in enabled drivers build config 00:02:51.955 net/failsafe: not in enabled drivers build config 00:02:51.955 net/fm10k: not in enabled drivers build config 00:02:51.955 net/gve: not in enabled drivers build config 00:02:51.955 net/hinic: not in enabled drivers build config 00:02:51.955 net/hns3: not in enabled drivers build config 00:02:51.955 net/i40e: not in enabled drivers build config 00:02:51.955 net/iavf: not in enabled drivers build config 00:02:51.955 net/ice: not in enabled drivers build config 00:02:51.955 net/idpf: not in enabled drivers build config 00:02:51.955 net/igc: not in enabled drivers build config 00:02:51.955 net/ionic: not in enabled drivers build config 00:02:51.955 net/ipn3ke: not in enabled drivers build config 00:02:51.955 net/ixgbe: not in enabled drivers build config 00:02:51.955 net/mana: not in enabled drivers build config 00:02:51.955 net/memif: not in enabled drivers build config 00:02:51.955 net/mlx4: not in enabled drivers build config 00:02:51.955 net/mlx5: not in enabled drivers build config 00:02:51.955 net/mvneta: not in enabled drivers build config 00:02:51.955 net/mvpp2: not in enabled drivers build config 00:02:51.955 net/netvsc: not in enabled drivers build config 00:02:51.955 net/nfb: not in enabled drivers build config 00:02:51.955 net/nfp: not in enabled drivers build config 00:02:51.955 net/ngbe: not in enabled drivers build config 00:02:51.955 net/null: not in enabled drivers build config 00:02:51.955 net/octeontx: not in enabled drivers build config 00:02:51.955 net/octeon_ep: not in enabled drivers build config 00:02:51.955 net/pcap: not in enabled drivers build config 00:02:51.955 net/pfe: not in enabled drivers build config 00:02:51.955 net/qede: not in enabled drivers build config 00:02:51.955 net/ring: not in enabled drivers build config 00:02:51.955 net/sfc: not in enabled drivers build config 00:02:51.955 net/softnic: not in enabled drivers build config 00:02:51.955 net/tap: not in enabled drivers build config 00:02:51.955 net/thunderx: not in enabled drivers build config 00:02:51.955 net/txgbe: not in enabled drivers build config 00:02:51.955 net/vdev_netvsc: not in enabled drivers build config 00:02:51.955 net/vhost: not in enabled drivers build config 00:02:51.955 net/virtio: not in enabled drivers build config 00:02:51.955 net/vmxnet3: not in enabled drivers build config 00:02:51.955 raw/*: missing internal dependency, "rawdev" 00:02:51.955 crypto/armv8: not in enabled drivers build config 00:02:51.955 crypto/bcmfs: not in enabled drivers build config 00:02:51.956 crypto/caam_jr: not in enabled drivers build config 00:02:51.956 crypto/ccp: not in enabled drivers build config 00:02:51.956 crypto/cnxk: not in enabled drivers build config 00:02:51.956 crypto/dpaa_sec: not in enabled drivers build config 00:02:51.956 crypto/dpaa2_sec: not in enabled drivers build config 00:02:51.956 crypto/ipsec_mb: not in enabled drivers build config 00:02:51.956 crypto/mlx5: not in enabled drivers build config 00:02:51.956 crypto/mvsam: not in enabled drivers build config 00:02:51.956 crypto/nitrox: not in enabled drivers build config 00:02:51.956 crypto/null: not in enabled drivers build config 00:02:51.956 crypto/octeontx: not in enabled drivers build config 00:02:51.956 crypto/openssl: not in enabled drivers build config 00:02:51.956 crypto/scheduler: not in enabled drivers build config 
00:02:51.956 crypto/uadk: not in enabled drivers build config 00:02:51.956 crypto/virtio: not in enabled drivers build config 00:02:51.956 compress/isal: not in enabled drivers build config 00:02:51.956 compress/mlx5: not in enabled drivers build config 00:02:51.956 compress/nitrox: not in enabled drivers build config 00:02:51.956 compress/octeontx: not in enabled drivers build config 00:02:51.956 compress/zlib: not in enabled drivers build config 00:02:51.956 regex/*: missing internal dependency, "regexdev" 00:02:51.956 ml/*: missing internal dependency, "mldev" 00:02:51.956 vdpa/ifc: not in enabled drivers build config 00:02:51.956 vdpa/mlx5: not in enabled drivers build config 00:02:51.956 vdpa/nfp: not in enabled drivers build config 00:02:51.956 vdpa/sfc: not in enabled drivers build config 00:02:51.956 event/*: missing internal dependency, "eventdev" 00:02:51.956 baseband/*: missing internal dependency, "bbdev" 00:02:51.956 gpu/*: missing internal dependency, "gpudev" 00:02:51.956 00:02:51.956 00:02:51.956 Build targets in project: 84 00:02:51.956 00:02:51.956 DPDK 24.03.0 00:02:51.956 00:02:51.956 User defined options 00:02:51.956 buildtype : debug 00:02:51.956 default_library : shared 00:02:51.956 libdir : lib 00:02:51.956 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:51.956 b_sanitize : address 00:02:51.956 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:51.956 c_link_args : 00:02:51.956 cpu_instruction_set: native 00:02:51.956 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:51.956 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:51.956 enable_docs : false 00:02:51.956 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:51.956 enable_kmods : false 00:02:51.956 max_lcores : 128 00:02:51.956 tests : false 00:02:51.956 00:02:51.956 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:52.522 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:52.522 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:52.522 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:52.522 [3/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:52.522 [4/267] Linking static target lib/librte_kvargs.a 00:02:52.522 [5/267] Linking static target lib/librte_log.a 00:02:52.522 [6/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:52.780 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:52.780 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:52.780 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:52.780 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:52.780 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:52.780 [12/267] Linking static target lib/librte_telemetry.a 00:02:52.780 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:52.780 [14/267] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:52.780 [15/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:52.780 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:52.780 [17/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.780 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:53.038 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:53.295 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:53.295 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:53.295 [22/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.295 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:53.295 [24/267] Linking target lib/librte_log.so.24.1 00:02:53.295 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:53.295 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:53.295 [27/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:53.295 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:53.295 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:53.295 [30/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:53.553 [31/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.553 [32/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:53.553 [33/267] Linking target lib/librte_kvargs.so.24.1 00:02:53.553 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:53.553 [35/267] Linking target lib/librte_telemetry.so.24.1 00:02:53.810 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:53.810 [37/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:53.810 [38/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:53.810 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:53.810 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:53.810 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:53.810 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:53.810 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:53.810 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:53.810 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:53.810 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:53.810 [47/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:53.810 [48/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:54.068 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:54.068 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:54.068 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:54.068 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:54.068 
[53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:54.068 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:54.068 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:54.068 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:54.068 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:54.326 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:54.326 [59/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:54.326 [60/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:54.326 [61/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:54.326 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:54.326 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:54.326 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:54.584 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:54.584 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:54.584 [67/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:54.584 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:54.584 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:54.584 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:54.584 [71/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:54.584 [72/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:54.842 [73/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:54.842 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:54.842 [75/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:54.842 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:54.842 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:54.842 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:54.842 [79/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:54.842 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:54.842 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:54.842 [82/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:55.099 [83/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:55.099 [84/267] Linking static target lib/librte_ring.a 00:02:55.099 [85/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:55.099 [86/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:55.099 [87/267] Linking static target lib/librte_eal.a 00:02:55.099 [88/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:55.099 [89/267] Linking static target lib/librte_rcu.a 00:02:55.099 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:55.357 [91/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:55.357 [92/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:55.357 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:55.357 [94/267] Compiling C object 
lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:55.357 [95/267] Linking static target lib/librte_mempool.a 00:02:55.357 [96/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:55.357 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.614 [98/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:55.614 [99/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:55.614 [100/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:55.614 [101/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.614 [102/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:55.615 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:55.615 [104/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:55.615 [105/267] Linking static target lib/librte_mbuf.a 00:02:55.615 [106/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:55.872 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:55.872 [108/267] Linking static target lib/librte_meter.a 00:02:55.872 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:55.872 [110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:55.872 [111/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:56.130 [112/267] Linking static target lib/librte_net.a 00:02:56.130 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:56.130 [114/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.130 [115/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:56.387 [116/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:56.387 [117/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.387 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:56.387 [119/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.644 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:56.644 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.644 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:56.901 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:56.901 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:56.901 [125/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:56.901 [126/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:56.901 [127/267] Linking static target lib/librte_pci.a 00:02:56.901 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:56.901 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:57.158 [130/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:57.158 [131/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:57.158 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:57.158 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:57.158 [134/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:57.158 [135/267] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:57.158 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:57.158 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:57.158 [138/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.158 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:57.158 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:57.158 [141/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:57.158 [142/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:57.158 [143/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:57.415 [144/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:57.415 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:57.415 [146/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:57.415 [147/267] Linking static target lib/librte_cmdline.a 00:02:57.672 [148/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:57.672 [149/267] Linking static target lib/librte_timer.a 00:02:57.672 [150/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:57.672 [151/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:57.672 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:57.672 [153/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:57.930 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:57.930 [155/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:57.930 [156/267] Linking static target lib/librte_ethdev.a 00:02:57.930 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:58.187 [158/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:58.187 [159/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:58.187 [160/267] Linking static target lib/librte_compressdev.a 00:02:58.187 [161/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.187 [162/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:58.187 [163/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:58.187 [164/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:58.187 [165/267] Linking static target lib/librte_hash.a 00:02:58.187 [166/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:58.187 [167/267] Linking static target lib/librte_dmadev.a 00:02:58.187 [168/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:58.445 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:58.445 [170/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:58.445 [171/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:58.703 [172/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:58.703 [173/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.703 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:58.960 [175/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:58.960 [176/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:58.960 [177/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.960 [178/267] Linking static target lib/librte_cryptodev.a 00:02:58.960 [179/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:58.960 [180/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:58.960 [181/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:58.960 [182/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:58.960 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.255 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:59.255 [185/267] Linking static target lib/librte_power.a 00:02:59.255 [186/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:59.529 [187/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:59.529 [188/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:59.529 [189/267] Linking static target lib/librte_security.a 00:02:59.529 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:59.529 [191/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:59.529 [192/267] Linking static target lib/librte_reorder.a 00:02:59.787 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:00.044 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.044 [195/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.044 [196/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:00.044 [197/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:00.302 [198/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:00.302 [199/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.560 [200/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:00.560 [201/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:00.560 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:00.560 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:00.560 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:00.561 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:00.818 [206/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:00.818 [207/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:00.818 [208/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.818 [209/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:00.818 [210/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:01.077 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:01.077 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.077 [213/267] Compiling C object 
drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:01.077 [214/267] Linking static target drivers/librte_bus_vdev.a 00:03:01.077 [215/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:01.077 [216/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:01.077 [217/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:01.077 [218/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.077 [219/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:01.077 [220/267] Linking static target drivers/librte_bus_pci.a 00:03:01.077 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:01.335 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.335 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:01.335 [224/267] Linking static target drivers/librte_mempool_ring.a 00:03:01.335 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.594 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.594 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:02.969 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.969 [229/267] Linking target lib/librte_eal.so.24.1 00:03:02.969 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:02.969 [231/267] Linking target lib/librte_dmadev.so.24.1 00:03:02.969 [232/267] Linking target lib/librte_ring.so.24.1 00:03:02.969 [233/267] Linking target lib/librte_meter.so.24.1 00:03:02.969 [234/267] Linking target lib/librte_timer.so.24.1 00:03:02.969 [235/267] Linking target lib/librte_pci.so.24.1 00:03:02.969 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:03:02.969 [237/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:03.228 [238/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:03.228 [239/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:03.228 [240/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:03.228 [241/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:03.228 [242/267] Linking target lib/librte_rcu.so.24.1 00:03:03.228 [243/267] Linking target lib/librte_mempool.so.24.1 00:03:03.228 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:03:03.228 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:03.228 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:03.228 [247/267] Linking target drivers/librte_mempool_ring.so.24.1 00:03:03.228 [248/267] Linking target lib/librte_mbuf.so.24.1 00:03:03.486 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:03.486 [250/267] Linking target lib/librte_net.so.24.1 00:03:03.486 [251/267] Linking target lib/librte_compressdev.so.24.1 00:03:03.486 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:03:03.486 [253/267] Linking target lib/librte_reorder.so.24.1 00:03:03.486 [254/267] Generating lib/ethdev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:03:03.486 [255/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:03.486 [256/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:03.744 [257/267] Linking target lib/librte_cmdline.so.24.1 00:03:03.744 [258/267] Linking target lib/librte_hash.so.24.1 00:03:03.744 [259/267] Linking target lib/librte_security.so.24.1 00:03:03.744 [260/267] Linking target lib/librte_ethdev.so.24.1 00:03:03.744 [261/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:03.744 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:03.744 [263/267] Linking target lib/librte_power.so.24.1 00:03:05.117 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:05.117 [265/267] Linking static target lib/librte_vhost.a 00:03:06.051 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.309 [267/267] Linking target lib/librte_vhost.so.24.1 00:03:06.309 INFO: autodetecting backend as ninja 00:03:06.309 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:21.174 CC lib/ut/ut.o 00:03:21.174 CC lib/ut_mock/mock.o 00:03:21.174 CC lib/log/log_flags.o 00:03:21.174 CC lib/log/log_deprecated.o 00:03:21.174 CC lib/log/log.o 00:03:21.174 LIB libspdk_log.a 00:03:21.174 LIB libspdk_ut.a 00:03:21.174 LIB libspdk_ut_mock.a 00:03:21.174 SO libspdk_ut.so.2.0 00:03:21.174 SO libspdk_log.so.7.0 00:03:21.174 SO libspdk_ut_mock.so.6.0 00:03:21.174 SYMLINK libspdk_ut.so 00:03:21.174 SYMLINK libspdk_ut_mock.so 00:03:21.174 SYMLINK libspdk_log.so 00:03:21.174 CC lib/util/base64.o 00:03:21.174 CC lib/dma/dma.o 00:03:21.174 CC lib/util/bit_array.o 00:03:21.174 CC lib/ioat/ioat.o 00:03:21.174 CC lib/util/cpuset.o 00:03:21.174 CXX lib/trace_parser/trace.o 00:03:21.174 CC lib/util/crc32.o 00:03:21.174 CC lib/util/crc32c.o 00:03:21.174 CC lib/util/crc16.o 00:03:21.174 CC lib/vfio_user/host/vfio_user_pci.o 00:03:21.174 CC lib/util/crc32_ieee.o 00:03:21.174 CC lib/util/crc64.o 00:03:21.174 CC lib/util/dif.o 00:03:21.174 CC lib/util/fd.o 00:03:21.174 CC lib/util/fd_group.o 00:03:21.174 LIB libspdk_dma.a 00:03:21.174 CC lib/util/file.o 00:03:21.174 SO libspdk_dma.so.5.0 00:03:21.174 CC lib/vfio_user/host/vfio_user.o 00:03:21.174 CC lib/util/hexlify.o 00:03:21.174 SYMLINK libspdk_dma.so 00:03:21.174 CC lib/util/iov.o 00:03:21.174 LIB libspdk_ioat.a 00:03:21.174 CC lib/util/math.o 00:03:21.174 SO libspdk_ioat.so.7.0 00:03:21.174 CC lib/util/net.o 00:03:21.174 SYMLINK libspdk_ioat.so 00:03:21.174 CC lib/util/pipe.o 00:03:21.174 CC lib/util/strerror_tls.o 00:03:21.174 CC lib/util/string.o 00:03:21.174 CC lib/util/uuid.o 00:03:21.175 CC lib/util/xor.o 00:03:21.175 CC lib/util/zipf.o 00:03:21.175 LIB libspdk_vfio_user.a 00:03:21.175 SO libspdk_vfio_user.so.5.0 00:03:21.175 CC lib/util/md5.o 00:03:21.175 SYMLINK libspdk_vfio_user.so 00:03:21.175 LIB libspdk_util.a 00:03:21.175 SO libspdk_util.so.10.0 00:03:21.175 LIB libspdk_trace_parser.a 00:03:21.175 SO libspdk_trace_parser.so.6.0 00:03:21.175 SYMLINK libspdk_util.so 00:03:21.175 SYMLINK libspdk_trace_parser.so 00:03:21.175 CC lib/env_dpdk/env.o 00:03:21.175 CC lib/json/json_parse.o 00:03:21.175 CC lib/env_dpdk/memory.o 00:03:21.175 CC lib/rdma_provider/common.o 00:03:21.175 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:21.175 CC lib/json/json_util.o 
00:03:21.175 CC lib/conf/conf.o 00:03:21.175 CC lib/rdma_utils/rdma_utils.o 00:03:21.175 CC lib/vmd/vmd.o 00:03:21.175 CC lib/idxd/idxd.o 00:03:21.175 CC lib/idxd/idxd_user.o 00:03:21.175 LIB libspdk_rdma_provider.a 00:03:21.175 SO libspdk_rdma_provider.so.6.0 00:03:21.175 LIB libspdk_conf.a 00:03:21.175 CC lib/idxd/idxd_kernel.o 00:03:21.175 SO libspdk_conf.so.6.0 00:03:21.175 CC lib/json/json_write.o 00:03:21.175 SYMLINK libspdk_rdma_provider.so 00:03:21.175 CC lib/env_dpdk/pci.o 00:03:21.175 LIB libspdk_rdma_utils.a 00:03:21.175 SYMLINK libspdk_conf.so 00:03:21.175 CC lib/env_dpdk/init.o 00:03:21.175 SO libspdk_rdma_utils.so.1.0 00:03:21.175 CC lib/env_dpdk/threads.o 00:03:21.175 SYMLINK libspdk_rdma_utils.so 00:03:21.175 CC lib/vmd/led.o 00:03:21.175 CC lib/env_dpdk/pci_ioat.o 00:03:21.175 CC lib/env_dpdk/pci_virtio.o 00:03:21.433 CC lib/env_dpdk/pci_vmd.o 00:03:21.433 CC lib/env_dpdk/pci_idxd.o 00:03:21.433 LIB libspdk_json.a 00:03:21.433 SO libspdk_json.so.6.0 00:03:21.433 CC lib/env_dpdk/pci_event.o 00:03:21.433 CC lib/env_dpdk/sigbus_handler.o 00:03:21.433 CC lib/env_dpdk/pci_dpdk.o 00:03:21.433 SYMLINK libspdk_json.so 00:03:21.433 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:21.433 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:21.433 LIB libspdk_idxd.a 00:03:21.433 LIB libspdk_vmd.a 00:03:21.433 SO libspdk_idxd.so.12.1 00:03:21.433 SO libspdk_vmd.so.6.0 00:03:21.691 SYMLINK libspdk_idxd.so 00:03:21.691 SYMLINK libspdk_vmd.so 00:03:21.691 CC lib/jsonrpc/jsonrpc_client.o 00:03:21.691 CC lib/jsonrpc/jsonrpc_server.o 00:03:21.691 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:21.691 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:21.949 LIB libspdk_jsonrpc.a 00:03:21.949 SO libspdk_jsonrpc.so.6.0 00:03:21.949 SYMLINK libspdk_jsonrpc.so 00:03:22.207 LIB libspdk_env_dpdk.a 00:03:22.207 CC lib/rpc/rpc.o 00:03:22.207 SO libspdk_env_dpdk.so.15.0 00:03:22.465 SYMLINK libspdk_env_dpdk.so 00:03:22.465 LIB libspdk_rpc.a 00:03:22.465 SO libspdk_rpc.so.6.0 00:03:22.465 SYMLINK libspdk_rpc.so 00:03:22.723 CC lib/trace/trace.o 00:03:22.723 CC lib/trace/trace_flags.o 00:03:22.723 CC lib/trace/trace_rpc.o 00:03:22.723 CC lib/notify/notify.o 00:03:22.723 CC lib/notify/notify_rpc.o 00:03:22.723 CC lib/keyring/keyring.o 00:03:22.723 CC lib/keyring/keyring_rpc.o 00:03:22.723 LIB libspdk_notify.a 00:03:22.723 SO libspdk_notify.so.6.0 00:03:22.982 SYMLINK libspdk_notify.so 00:03:22.982 LIB libspdk_trace.a 00:03:22.982 LIB libspdk_keyring.a 00:03:22.982 SO libspdk_trace.so.11.0 00:03:22.982 SO libspdk_keyring.so.2.0 00:03:22.982 SYMLINK libspdk_trace.so 00:03:22.982 SYMLINK libspdk_keyring.so 00:03:23.240 CC lib/sock/sock.o 00:03:23.240 CC lib/sock/sock_rpc.o 00:03:23.240 CC lib/thread/thread.o 00:03:23.240 CC lib/thread/iobuf.o 00:03:23.499 LIB libspdk_sock.a 00:03:23.499 SO libspdk_sock.so.10.0 00:03:23.756 SYMLINK libspdk_sock.so 00:03:24.013 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:24.013 CC lib/nvme/nvme_ctrlr.o 00:03:24.013 CC lib/nvme/nvme_ns.o 00:03:24.013 CC lib/nvme/nvme_fabric.o 00:03:24.013 CC lib/nvme/nvme_pcie.o 00:03:24.013 CC lib/nvme/nvme_ns_cmd.o 00:03:24.013 CC lib/nvme/nvme_pcie_common.o 00:03:24.013 CC lib/nvme/nvme_qpair.o 00:03:24.013 CC lib/nvme/nvme.o 00:03:24.649 CC lib/nvme/nvme_quirks.o 00:03:24.649 CC lib/nvme/nvme_transport.o 00:03:24.649 CC lib/nvme/nvme_discovery.o 00:03:24.649 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:24.649 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:24.649 CC lib/nvme/nvme_tcp.o 00:03:24.649 CC lib/nvme/nvme_opal.o 00:03:24.649 LIB libspdk_thread.a 00:03:24.649 SO libspdk_thread.so.10.1 
00:03:24.649 SYMLINK libspdk_thread.so 00:03:24.649 CC lib/nvme/nvme_io_msg.o 00:03:24.906 CC lib/nvme/nvme_poll_group.o 00:03:24.906 CC lib/nvme/nvme_zns.o 00:03:24.906 CC lib/nvme/nvme_stubs.o 00:03:25.164 CC lib/nvme/nvme_auth.o 00:03:25.164 CC lib/nvme/nvme_cuse.o 00:03:25.164 CC lib/nvme/nvme_rdma.o 00:03:25.164 CC lib/accel/accel.o 00:03:25.422 CC lib/accel/accel_rpc.o 00:03:25.422 CC lib/accel/accel_sw.o 00:03:25.422 CC lib/blob/blobstore.o 00:03:25.422 CC lib/init/json_config.o 00:03:25.679 CC lib/virtio/virtio.o 00:03:25.679 CC lib/init/subsystem.o 00:03:25.679 CC lib/init/subsystem_rpc.o 00:03:25.679 CC lib/virtio/virtio_vhost_user.o 00:03:25.679 CC lib/virtio/virtio_vfio_user.o 00:03:25.679 CC lib/init/rpc.o 00:03:25.957 CC lib/virtio/virtio_pci.o 00:03:25.957 LIB libspdk_init.a 00:03:25.957 SO libspdk_init.so.6.0 00:03:25.957 SYMLINK libspdk_init.so 00:03:25.957 CC lib/blob/request.o 00:03:25.957 CC lib/blob/zeroes.o 00:03:25.957 CC lib/fsdev/fsdev.o 00:03:25.957 CC lib/blob/blob_bs_dev.o 00:03:25.957 LIB libspdk_virtio.a 00:03:26.223 LIB libspdk_accel.a 00:03:26.223 SO libspdk_virtio.so.7.0 00:03:26.223 CC lib/event/app.o 00:03:26.223 CC lib/event/reactor.o 00:03:26.223 SO libspdk_accel.so.16.0 00:03:26.223 CC lib/event/log_rpc.o 00:03:26.223 SYMLINK libspdk_virtio.so 00:03:26.223 CC lib/event/app_rpc.o 00:03:26.223 SYMLINK libspdk_accel.so 00:03:26.223 CC lib/event/scheduler_static.o 00:03:26.223 CC lib/fsdev/fsdev_io.o 00:03:26.223 CC lib/fsdev/fsdev_rpc.o 00:03:26.480 CC lib/bdev/bdev.o 00:03:26.480 CC lib/bdev/bdev_rpc.o 00:03:26.480 LIB libspdk_nvme.a 00:03:26.480 CC lib/bdev/bdev_zone.o 00:03:26.480 CC lib/bdev/part.o 00:03:26.480 CC lib/bdev/scsi_nvme.o 00:03:26.480 SO libspdk_nvme.so.14.0 00:03:26.480 LIB libspdk_fsdev.a 00:03:26.480 LIB libspdk_event.a 00:03:26.738 SO libspdk_fsdev.so.1.0 00:03:26.738 SO libspdk_event.so.14.0 00:03:26.738 SYMLINK libspdk_fsdev.so 00:03:26.738 SYMLINK libspdk_event.so 00:03:26.738 SYMLINK libspdk_nvme.so 00:03:26.996 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:27.563 LIB libspdk_fuse_dispatcher.a 00:03:27.563 SO libspdk_fuse_dispatcher.so.1.0 00:03:27.563 SYMLINK libspdk_fuse_dispatcher.so 00:03:28.939 LIB libspdk_blob.a 00:03:28.939 SO libspdk_blob.so.11.0 00:03:28.939 SYMLINK libspdk_blob.so 00:03:28.939 CC lib/blobfs/blobfs.o 00:03:28.939 CC lib/blobfs/tree.o 00:03:28.939 CC lib/lvol/lvol.o 00:03:29.197 LIB libspdk_bdev.a 00:03:29.197 SO libspdk_bdev.so.16.0 00:03:29.455 SYMLINK libspdk_bdev.so 00:03:29.455 CC lib/ublk/ublk.o 00:03:29.455 CC lib/ublk/ublk_rpc.o 00:03:29.455 CC lib/nvmf/ctrlr.o 00:03:29.455 CC lib/ftl/ftl_core.o 00:03:29.455 CC lib/nvmf/ctrlr_discovery.o 00:03:29.455 CC lib/nvmf/ctrlr_bdev.o 00:03:29.455 CC lib/nbd/nbd.o 00:03:29.455 CC lib/scsi/dev.o 00:03:29.713 CC lib/scsi/lun.o 00:03:29.713 LIB libspdk_blobfs.a 00:03:29.713 SO libspdk_blobfs.so.10.0 00:03:29.713 CC lib/scsi/port.o 00:03:29.713 SYMLINK libspdk_blobfs.so 00:03:29.713 CC lib/scsi/scsi.o 00:03:29.713 CC lib/ftl/ftl_init.o 00:03:29.973 CC lib/scsi/scsi_bdev.o 00:03:29.973 CC lib/ftl/ftl_layout.o 00:03:29.973 CC lib/nbd/nbd_rpc.o 00:03:29.973 CC lib/ftl/ftl_debug.o 00:03:29.973 CC lib/scsi/scsi_pr.o 00:03:29.973 LIB libspdk_lvol.a 00:03:29.973 CC lib/scsi/scsi_rpc.o 00:03:29.973 SO libspdk_lvol.so.10.0 00:03:29.973 LIB libspdk_nbd.a 00:03:29.974 SO libspdk_nbd.so.7.0 00:03:29.974 SYMLINK libspdk_lvol.so 00:03:29.974 CC lib/scsi/task.o 00:03:30.232 SYMLINK libspdk_nbd.so 00:03:30.232 LIB libspdk_ublk.a 00:03:30.232 CC lib/ftl/ftl_io.o 00:03:30.232 CC 
lib/nvmf/subsystem.o 00:03:30.232 CC lib/nvmf/nvmf.o 00:03:30.232 SO libspdk_ublk.so.3.0 00:03:30.232 SYMLINK libspdk_ublk.so 00:03:30.232 CC lib/ftl/ftl_sb.o 00:03:30.232 CC lib/nvmf/nvmf_rpc.o 00:03:30.232 CC lib/nvmf/transport.o 00:03:30.232 CC lib/ftl/ftl_l2p.o 00:03:30.232 CC lib/ftl/ftl_l2p_flat.o 00:03:30.232 LIB libspdk_scsi.a 00:03:30.232 CC lib/ftl/ftl_nv_cache.o 00:03:30.232 SO libspdk_scsi.so.9.0 00:03:30.232 CC lib/ftl/ftl_band.o 00:03:30.490 SYMLINK libspdk_scsi.so 00:03:30.490 CC lib/ftl/ftl_band_ops.o 00:03:30.490 CC lib/ftl/ftl_writer.o 00:03:30.490 CC lib/ftl/ftl_rq.o 00:03:30.490 CC lib/ftl/ftl_reloc.o 00:03:30.748 CC lib/ftl/ftl_l2p_cache.o 00:03:30.748 CC lib/ftl/ftl_p2l.o 00:03:30.748 CC lib/ftl/ftl_p2l_log.o 00:03:31.006 CC lib/nvmf/tcp.o 00:03:31.006 CC lib/ftl/mngt/ftl_mngt.o 00:03:31.006 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:31.006 CC lib/iscsi/conn.o 00:03:31.006 CC lib/iscsi/init_grp.o 00:03:31.006 CC lib/nvmf/stubs.o 00:03:31.263 CC lib/iscsi/iscsi.o 00:03:31.263 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:31.263 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:31.263 CC lib/iscsi/param.o 00:03:31.263 CC lib/iscsi/portal_grp.o 00:03:31.263 CC lib/iscsi/tgt_node.o 00:03:31.263 CC lib/nvmf/mdns_server.o 00:03:31.263 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:31.263 CC lib/nvmf/rdma.o 00:03:31.521 CC lib/iscsi/iscsi_subsystem.o 00:03:31.521 CC lib/iscsi/iscsi_rpc.o 00:03:31.521 CC lib/nvmf/auth.o 00:03:31.521 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:31.521 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:31.779 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:31.779 CC lib/iscsi/task.o 00:03:31.779 CC lib/vhost/vhost.o 00:03:31.779 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:31.779 CC lib/vhost/vhost_rpc.o 00:03:31.779 CC lib/vhost/vhost_scsi.o 00:03:31.779 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:32.036 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:32.036 CC lib/vhost/vhost_blk.o 00:03:32.036 CC lib/vhost/rte_vhost_user.o 00:03:32.036 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:32.295 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:32.295 CC lib/ftl/utils/ftl_conf.o 00:03:32.295 CC lib/ftl/utils/ftl_md.o 00:03:32.295 CC lib/ftl/utils/ftl_mempool.o 00:03:32.553 CC lib/ftl/utils/ftl_bitmap.o 00:03:32.553 CC lib/ftl/utils/ftl_property.o 00:03:32.553 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:32.553 LIB libspdk_iscsi.a 00:03:32.553 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:32.553 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:32.553 SO libspdk_iscsi.so.8.0 00:03:32.553 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:32.553 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:32.553 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:32.811 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:32.811 SYMLINK libspdk_iscsi.so 00:03:32.811 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:32.811 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:32.811 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:32.811 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:32.811 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:32.811 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:32.811 CC lib/ftl/base/ftl_base_dev.o 00:03:32.811 LIB libspdk_vhost.a 00:03:32.811 CC lib/ftl/base/ftl_base_bdev.o 00:03:32.811 CC lib/ftl/ftl_trace.o 00:03:32.811 SO libspdk_vhost.so.8.0 00:03:33.071 SYMLINK libspdk_vhost.so 00:03:33.071 LIB libspdk_ftl.a 00:03:33.331 SO libspdk_ftl.so.9.0 00:03:33.589 LIB libspdk_nvmf.a 00:03:33.589 SYMLINK libspdk_ftl.so 00:03:33.589 SO libspdk_nvmf.so.19.0 00:03:33.847 SYMLINK libspdk_nvmf.so 00:03:34.106 CC module/env_dpdk/env_dpdk_rpc.o 00:03:34.106 CC module/keyring/file/keyring.o 00:03:34.106 CC 
module/accel/ioat/accel_ioat.o 00:03:34.106 CC module/fsdev/aio/fsdev_aio.o 00:03:34.106 CC module/blob/bdev/blob_bdev.o 00:03:34.106 CC module/accel/iaa/accel_iaa.o 00:03:34.106 CC module/sock/posix/posix.o 00:03:34.106 CC module/accel/dsa/accel_dsa.o 00:03:34.106 CC module/accel/error/accel_error.o 00:03:34.106 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:34.106 LIB libspdk_env_dpdk_rpc.a 00:03:34.364 SO libspdk_env_dpdk_rpc.so.6.0 00:03:34.364 SYMLINK libspdk_env_dpdk_rpc.so 00:03:34.364 CC module/keyring/file/keyring_rpc.o 00:03:34.364 CC module/accel/iaa/accel_iaa_rpc.o 00:03:34.364 CC module/accel/ioat/accel_ioat_rpc.o 00:03:34.364 CC module/accel/error/accel_error_rpc.o 00:03:34.364 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:34.364 LIB libspdk_scheduler_dynamic.a 00:03:34.364 LIB libspdk_blob_bdev.a 00:03:34.364 SO libspdk_scheduler_dynamic.so.4.0 00:03:34.364 SO libspdk_blob_bdev.so.11.0 00:03:34.364 LIB libspdk_accel_iaa.a 00:03:34.364 LIB libspdk_keyring_file.a 00:03:34.364 LIB libspdk_accel_ioat.a 00:03:34.364 SO libspdk_accel_iaa.so.3.0 00:03:34.364 SO libspdk_keyring_file.so.2.0 00:03:34.364 CC module/accel/dsa/accel_dsa_rpc.o 00:03:34.364 SO libspdk_accel_ioat.so.6.0 00:03:34.364 SYMLINK libspdk_blob_bdev.so 00:03:34.364 SYMLINK libspdk_scheduler_dynamic.so 00:03:34.364 LIB libspdk_accel_error.a 00:03:34.364 SYMLINK libspdk_keyring_file.so 00:03:34.364 CC module/fsdev/aio/linux_aio_mgr.o 00:03:34.364 SYMLINK libspdk_accel_iaa.so 00:03:34.622 SO libspdk_accel_error.so.2.0 00:03:34.622 SYMLINK libspdk_accel_ioat.so 00:03:34.622 SYMLINK libspdk_accel_error.so 00:03:34.622 LIB libspdk_accel_dsa.a 00:03:34.622 SO libspdk_accel_dsa.so.5.0 00:03:34.622 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:34.622 CC module/keyring/linux/keyring.o 00:03:34.622 CC module/scheduler/gscheduler/gscheduler.o 00:03:34.622 SYMLINK libspdk_accel_dsa.so 00:03:34.622 CC module/keyring/linux/keyring_rpc.o 00:03:34.622 CC module/bdev/delay/vbdev_delay.o 00:03:34.622 CC module/bdev/error/vbdev_error.o 00:03:34.622 CC module/blobfs/bdev/blobfs_bdev.o 00:03:34.622 LIB libspdk_scheduler_dpdk_governor.a 00:03:34.881 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:34.881 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:34.881 LIB libspdk_keyring_linux.a 00:03:34.881 LIB libspdk_scheduler_gscheduler.a 00:03:34.881 SO libspdk_scheduler_gscheduler.so.4.0 00:03:34.881 CC module/bdev/gpt/gpt.o 00:03:34.881 SO libspdk_keyring_linux.so.1.0 00:03:34.881 LIB libspdk_fsdev_aio.a 00:03:34.881 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:34.881 CC module/bdev/error/vbdev_error_rpc.o 00:03:34.881 SO libspdk_fsdev_aio.so.1.0 00:03:34.881 SYMLINK libspdk_scheduler_gscheduler.so 00:03:34.881 CC module/bdev/gpt/vbdev_gpt.o 00:03:34.881 SYMLINK libspdk_keyring_linux.so 00:03:34.881 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:34.881 LIB libspdk_sock_posix.a 00:03:34.881 LIB libspdk_blobfs_bdev.a 00:03:34.881 SYMLINK libspdk_fsdev_aio.so 00:03:34.881 SO libspdk_sock_posix.so.6.0 00:03:34.881 SO libspdk_blobfs_bdev.so.6.0 00:03:34.881 LIB libspdk_bdev_error.a 00:03:34.881 SYMLINK libspdk_sock_posix.so 00:03:34.881 SO libspdk_bdev_error.so.6.0 00:03:34.881 SYMLINK libspdk_blobfs_bdev.so 00:03:35.139 CC module/bdev/lvol/vbdev_lvol.o 00:03:35.139 SYMLINK libspdk_bdev_error.so 00:03:35.139 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:35.139 LIB libspdk_bdev_delay.a 00:03:35.139 CC module/bdev/malloc/bdev_malloc.o 00:03:35.139 CC module/bdev/null/bdev_null.o 00:03:35.139 SO libspdk_bdev_delay.so.6.0 00:03:35.139 CC 
module/bdev/nvme/bdev_nvme.o 00:03:35.139 LIB libspdk_bdev_gpt.a 00:03:35.139 CC module/bdev/passthru/vbdev_passthru.o 00:03:35.139 CC module/bdev/raid/bdev_raid.o 00:03:35.139 SYMLINK libspdk_bdev_delay.so 00:03:35.139 SO libspdk_bdev_gpt.so.6.0 00:03:35.139 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:35.139 CC module/bdev/split/vbdev_split.o 00:03:35.139 SYMLINK libspdk_bdev_gpt.so 00:03:35.139 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:35.398 CC module/bdev/split/vbdev_split_rpc.o 00:03:35.398 CC module/bdev/null/bdev_null_rpc.o 00:03:35.398 CC module/bdev/nvme/nvme_rpc.o 00:03:35.398 LIB libspdk_bdev_passthru.a 00:03:35.398 SO libspdk_bdev_passthru.so.6.0 00:03:35.398 LIB libspdk_bdev_split.a 00:03:35.398 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:35.398 SO libspdk_bdev_split.so.6.0 00:03:35.398 SYMLINK libspdk_bdev_passthru.so 00:03:35.398 CC module/bdev/nvme/bdev_mdns_client.o 00:03:35.398 SYMLINK libspdk_bdev_split.so 00:03:35.398 CC module/bdev/nvme/vbdev_opal.o 00:03:35.398 LIB libspdk_bdev_null.a 00:03:35.656 SO libspdk_bdev_null.so.6.0 00:03:35.656 LIB libspdk_bdev_lvol.a 00:03:35.656 SO libspdk_bdev_lvol.so.6.0 00:03:35.656 SYMLINK libspdk_bdev_null.so 00:03:35.656 LIB libspdk_bdev_malloc.a 00:03:35.656 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:35.656 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:35.656 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:35.656 SO libspdk_bdev_malloc.so.6.0 00:03:35.656 SYMLINK libspdk_bdev_lvol.so 00:03:35.656 SYMLINK libspdk_bdev_malloc.so 00:03:35.656 CC module/bdev/xnvme/bdev_xnvme.o 00:03:35.656 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:35.656 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:35.657 CC module/bdev/aio/bdev_aio.o 00:03:35.915 CC module/bdev/ftl/bdev_ftl.o 00:03:35.915 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:35.915 CC module/bdev/aio/bdev_aio_rpc.o 00:03:35.915 CC module/bdev/raid/bdev_raid_rpc.o 00:03:35.915 LIB libspdk_bdev_zone_block.a 00:03:35.915 SO libspdk_bdev_zone_block.so.6.0 00:03:35.915 LIB libspdk_bdev_xnvme.a 00:03:35.915 SO libspdk_bdev_xnvme.so.3.0 00:03:36.173 SYMLINK libspdk_bdev_zone_block.so 00:03:36.173 CC module/bdev/raid/bdev_raid_sb.o 00:03:36.173 CC module/bdev/raid/raid0.o 00:03:36.173 SYMLINK libspdk_bdev_xnvme.so 00:03:36.173 CC module/bdev/raid/raid1.o 00:03:36.173 LIB libspdk_bdev_ftl.a 00:03:36.173 CC module/bdev/iscsi/bdev_iscsi.o 00:03:36.173 CC module/bdev/raid/concat.o 00:03:36.173 SO libspdk_bdev_ftl.so.6.0 00:03:36.173 LIB libspdk_bdev_aio.a 00:03:36.173 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:36.173 SO libspdk_bdev_aio.so.6.0 00:03:36.173 SYMLINK libspdk_bdev_ftl.so 00:03:36.173 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:36.173 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:36.173 SYMLINK libspdk_bdev_aio.so 00:03:36.173 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:36.432 LIB libspdk_bdev_raid.a 00:03:36.432 SO libspdk_bdev_raid.so.6.0 00:03:36.432 LIB libspdk_bdev_iscsi.a 00:03:36.432 SYMLINK libspdk_bdev_raid.so 00:03:36.432 SO libspdk_bdev_iscsi.so.6.0 00:03:36.432 SYMLINK libspdk_bdev_iscsi.so 00:03:36.689 LIB libspdk_bdev_virtio.a 00:03:36.689 SO libspdk_bdev_virtio.so.6.0 00:03:36.689 SYMLINK libspdk_bdev_virtio.so 00:03:36.947 LIB libspdk_bdev_nvme.a 00:03:36.947 SO libspdk_bdev_nvme.so.7.0 00:03:36.947 SYMLINK libspdk_bdev_nvme.so 00:03:37.514 CC module/event/subsystems/sock/sock.o 00:03:37.514 CC module/event/subsystems/fsdev/fsdev.o 00:03:37.514 CC module/event/subsystems/vmd/vmd.o 00:03:37.514 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:37.514 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:03:37.514 CC module/event/subsystems/scheduler/scheduler.o 00:03:37.514 CC module/event/subsystems/keyring/keyring.o 00:03:37.514 CC module/event/subsystems/iobuf/iobuf.o 00:03:37.514 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:37.514 LIB libspdk_event_sock.a 00:03:37.514 LIB libspdk_event_keyring.a 00:03:37.514 LIB libspdk_event_vhost_blk.a 00:03:37.514 LIB libspdk_event_fsdev.a 00:03:37.514 LIB libspdk_event_vmd.a 00:03:37.514 LIB libspdk_event_scheduler.a 00:03:37.514 SO libspdk_event_sock.so.5.0 00:03:37.514 SO libspdk_event_keyring.so.1.0 00:03:37.514 SO libspdk_event_scheduler.so.4.0 00:03:37.514 SO libspdk_event_fsdev.so.1.0 00:03:37.514 SO libspdk_event_vmd.so.6.0 00:03:37.514 SO libspdk_event_vhost_blk.so.3.0 00:03:37.514 LIB libspdk_event_iobuf.a 00:03:37.514 SO libspdk_event_iobuf.so.3.0 00:03:37.774 SYMLINK libspdk_event_fsdev.so 00:03:37.774 SYMLINK libspdk_event_sock.so 00:03:37.774 SYMLINK libspdk_event_scheduler.so 00:03:37.774 SYMLINK libspdk_event_keyring.so 00:03:37.774 SYMLINK libspdk_event_vhost_blk.so 00:03:37.774 SYMLINK libspdk_event_vmd.so 00:03:37.774 SYMLINK libspdk_event_iobuf.so 00:03:38.033 CC module/event/subsystems/accel/accel.o 00:03:38.033 LIB libspdk_event_accel.a 00:03:38.033 SO libspdk_event_accel.so.6.0 00:03:38.292 SYMLINK libspdk_event_accel.so 00:03:38.552 CC module/event/subsystems/bdev/bdev.o 00:03:38.552 LIB libspdk_event_bdev.a 00:03:38.552 SO libspdk_event_bdev.so.6.0 00:03:38.813 SYMLINK libspdk_event_bdev.so 00:03:38.813 CC module/event/subsystems/scsi/scsi.o 00:03:38.813 CC module/event/subsystems/nbd/nbd.o 00:03:38.813 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:38.813 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:38.813 CC module/event/subsystems/ublk/ublk.o 00:03:39.071 LIB libspdk_event_nbd.a 00:03:39.071 LIB libspdk_event_scsi.a 00:03:39.071 LIB libspdk_event_ublk.a 00:03:39.071 SO libspdk_event_nbd.so.6.0 00:03:39.071 SO libspdk_event_scsi.so.6.0 00:03:39.071 SO libspdk_event_ublk.so.3.0 00:03:39.071 SYMLINK libspdk_event_nbd.so 00:03:39.071 SYMLINK libspdk_event_scsi.so 00:03:39.071 SYMLINK libspdk_event_ublk.so 00:03:39.071 LIB libspdk_event_nvmf.a 00:03:39.071 SO libspdk_event_nvmf.so.6.0 00:03:39.071 SYMLINK libspdk_event_nvmf.so 00:03:39.330 CC module/event/subsystems/iscsi/iscsi.o 00:03:39.330 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:39.330 LIB libspdk_event_vhost_scsi.a 00:03:39.330 LIB libspdk_event_iscsi.a 00:03:39.330 SO libspdk_event_vhost_scsi.so.3.0 00:03:39.588 SO libspdk_event_iscsi.so.6.0 00:03:39.588 SYMLINK libspdk_event_vhost_scsi.so 00:03:39.588 SYMLINK libspdk_event_iscsi.so 00:03:39.588 SO libspdk.so.6.0 00:03:39.588 SYMLINK libspdk.so 00:03:39.846 CC app/trace_record/trace_record.o 00:03:39.846 CXX app/trace/trace.o 00:03:39.846 CC app/spdk_lspci/spdk_lspci.o 00:03:39.846 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:39.846 CC app/nvmf_tgt/nvmf_main.o 00:03:39.846 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.846 CC examples/ioat/perf/perf.o 00:03:39.846 CC examples/util/zipf/zipf.o 00:03:39.846 CC app/spdk_tgt/spdk_tgt.o 00:03:39.846 CC test/thread/poller_perf/poller_perf.o 00:03:40.104 LINK spdk_lspci 00:03:40.104 LINK zipf 00:03:40.104 LINK interrupt_tgt 00:03:40.104 LINK nvmf_tgt 00:03:40.104 LINK spdk_trace_record 00:03:40.104 LINK spdk_tgt 00:03:40.104 LINK poller_perf 00:03:40.104 LINK iscsi_tgt 00:03:40.104 LINK ioat_perf 00:03:40.104 CC app/spdk_nvme_perf/perf.o 00:03:40.104 LINK spdk_trace 00:03:40.104 CC 
app/spdk_nvme_identify/identify.o 00:03:40.362 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.362 CC app/spdk_top/spdk_top.o 00:03:40.362 CC examples/ioat/verify/verify.o 00:03:40.362 CC app/spdk_dd/spdk_dd.o 00:03:40.362 CC test/dma/test_dma/test_dma.o 00:03:40.362 CC examples/thread/thread/thread_ex.o 00:03:40.362 CC examples/sock/hello_world/hello_sock.o 00:03:40.362 LINK spdk_nvme_discover 00:03:40.362 LINK verify 00:03:40.362 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.620 LINK lsvmd 00:03:40.620 LINK hello_sock 00:03:40.620 CC examples/vmd/led/led.o 00:03:40.620 LINK thread 00:03:40.620 LINK spdk_dd 00:03:40.620 CC app/fio/nvme/fio_plugin.o 00:03:40.885 LINK led 00:03:40.885 LINK test_dma 00:03:40.885 CC app/fio/bdev/fio_plugin.o 00:03:40.885 CC examples/idxd/perf/perf.o 00:03:40.885 CC test/app/bdev_svc/bdev_svc.o 00:03:40.885 CC app/vhost/vhost.o 00:03:41.162 LINK spdk_nvme_perf 00:03:41.162 TEST_HEADER include/spdk/accel.h 00:03:41.162 TEST_HEADER include/spdk/accel_module.h 00:03:41.162 TEST_HEADER include/spdk/assert.h 00:03:41.162 LINK spdk_nvme_identify 00:03:41.162 TEST_HEADER include/spdk/barrier.h 00:03:41.162 TEST_HEADER include/spdk/base64.h 00:03:41.162 TEST_HEADER include/spdk/bdev.h 00:03:41.162 TEST_HEADER include/spdk/bdev_module.h 00:03:41.162 TEST_HEADER include/spdk/bdev_zone.h 00:03:41.162 TEST_HEADER include/spdk/bit_array.h 00:03:41.162 TEST_HEADER include/spdk/bit_pool.h 00:03:41.162 TEST_HEADER include/spdk/blob_bdev.h 00:03:41.162 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:41.162 TEST_HEADER include/spdk/blobfs.h 00:03:41.162 TEST_HEADER include/spdk/blob.h 00:03:41.162 TEST_HEADER include/spdk/conf.h 00:03:41.162 TEST_HEADER include/spdk/config.h 00:03:41.162 TEST_HEADER include/spdk/cpuset.h 00:03:41.162 TEST_HEADER include/spdk/crc16.h 00:03:41.162 TEST_HEADER include/spdk/crc32.h 00:03:41.162 TEST_HEADER include/spdk/crc64.h 00:03:41.162 TEST_HEADER include/spdk/dif.h 00:03:41.162 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:41.162 TEST_HEADER include/spdk/dma.h 00:03:41.162 TEST_HEADER include/spdk/endian.h 00:03:41.162 TEST_HEADER include/spdk/env_dpdk.h 00:03:41.162 TEST_HEADER include/spdk/env.h 00:03:41.162 TEST_HEADER include/spdk/event.h 00:03:41.162 TEST_HEADER include/spdk/fd_group.h 00:03:41.162 TEST_HEADER include/spdk/fd.h 00:03:41.162 TEST_HEADER include/spdk/file.h 00:03:41.162 TEST_HEADER include/spdk/fsdev.h 00:03:41.162 TEST_HEADER include/spdk/fsdev_module.h 00:03:41.162 TEST_HEADER include/spdk/ftl.h 00:03:41.162 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:41.162 TEST_HEADER include/spdk/gpt_spec.h 00:03:41.162 TEST_HEADER include/spdk/hexlify.h 00:03:41.162 TEST_HEADER include/spdk/histogram_data.h 00:03:41.162 TEST_HEADER include/spdk/idxd.h 00:03:41.162 TEST_HEADER include/spdk/idxd_spec.h 00:03:41.162 TEST_HEADER include/spdk/init.h 00:03:41.162 TEST_HEADER include/spdk/ioat.h 00:03:41.162 TEST_HEADER include/spdk/ioat_spec.h 00:03:41.162 TEST_HEADER include/spdk/iscsi_spec.h 00:03:41.162 TEST_HEADER include/spdk/json.h 00:03:41.162 TEST_HEADER include/spdk/jsonrpc.h 00:03:41.162 TEST_HEADER include/spdk/keyring.h 00:03:41.162 TEST_HEADER include/spdk/keyring_module.h 00:03:41.162 TEST_HEADER include/spdk/likely.h 00:03:41.162 TEST_HEADER include/spdk/log.h 00:03:41.162 TEST_HEADER include/spdk/lvol.h 00:03:41.162 TEST_HEADER include/spdk/md5.h 00:03:41.162 TEST_HEADER include/spdk/memory.h 00:03:41.162 TEST_HEADER include/spdk/mmio.h 00:03:41.162 TEST_HEADER include/spdk/nbd.h 00:03:41.162 TEST_HEADER include/spdk/net.h 
00:03:41.162 TEST_HEADER include/spdk/notify.h 00:03:41.162 TEST_HEADER include/spdk/nvme.h 00:03:41.162 TEST_HEADER include/spdk/nvme_intel.h 00:03:41.162 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:41.162 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:41.162 LINK bdev_svc 00:03:41.162 TEST_HEADER include/spdk/nvme_spec.h 00:03:41.162 TEST_HEADER include/spdk/nvme_zns.h 00:03:41.162 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:41.162 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:41.162 TEST_HEADER include/spdk/nvmf.h 00:03:41.162 TEST_HEADER include/spdk/nvmf_spec.h 00:03:41.162 TEST_HEADER include/spdk/nvmf_transport.h 00:03:41.162 TEST_HEADER include/spdk/opal.h 00:03:41.162 TEST_HEADER include/spdk/opal_spec.h 00:03:41.162 TEST_HEADER include/spdk/pci_ids.h 00:03:41.162 TEST_HEADER include/spdk/pipe.h 00:03:41.162 TEST_HEADER include/spdk/queue.h 00:03:41.162 TEST_HEADER include/spdk/reduce.h 00:03:41.162 TEST_HEADER include/spdk/rpc.h 00:03:41.162 TEST_HEADER include/spdk/scheduler.h 00:03:41.162 TEST_HEADER include/spdk/scsi.h 00:03:41.162 TEST_HEADER include/spdk/scsi_spec.h 00:03:41.162 TEST_HEADER include/spdk/sock.h 00:03:41.162 TEST_HEADER include/spdk/stdinc.h 00:03:41.162 TEST_HEADER include/spdk/string.h 00:03:41.162 LINK vhost 00:03:41.162 TEST_HEADER include/spdk/thread.h 00:03:41.162 TEST_HEADER include/spdk/trace.h 00:03:41.162 TEST_HEADER include/spdk/trace_parser.h 00:03:41.162 TEST_HEADER include/spdk/tree.h 00:03:41.162 TEST_HEADER include/spdk/ublk.h 00:03:41.162 TEST_HEADER include/spdk/util.h 00:03:41.162 TEST_HEADER include/spdk/uuid.h 00:03:41.162 TEST_HEADER include/spdk/version.h 00:03:41.162 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:41.162 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:41.162 TEST_HEADER include/spdk/vhost.h 00:03:41.162 TEST_HEADER include/spdk/vmd.h 00:03:41.162 TEST_HEADER include/spdk/xor.h 00:03:41.162 TEST_HEADER include/spdk/zipf.h 00:03:41.162 CXX test/cpp_headers/accel.o 00:03:41.162 LINK spdk_top 00:03:41.162 LINK idxd_perf 00:03:41.438 LINK spdk_nvme 00:03:41.438 CC examples/accel/perf/accel_perf.o 00:03:41.438 LINK spdk_bdev 00:03:41.438 LINK hello_fsdev 00:03:41.438 CXX test/cpp_headers/accel_module.o 00:03:41.439 CXX test/cpp_headers/assert.o 00:03:41.439 CC examples/blob/hello_world/hello_blob.o 00:03:41.439 CXX test/cpp_headers/barrier.o 00:03:41.439 CXX test/cpp_headers/base64.o 00:03:41.439 CC test/app/jsoncat/jsoncat.o 00:03:41.439 CC test/app/histogram_perf/histogram_perf.o 00:03:41.439 CXX test/cpp_headers/bdev.o 00:03:41.439 CXX test/cpp_headers/bdev_module.o 00:03:41.699 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:41.699 LINK jsoncat 00:03:41.699 CXX test/cpp_headers/bdev_zone.o 00:03:41.699 LINK histogram_perf 00:03:41.699 LINK hello_blob 00:03:41.699 CXX test/cpp_headers/bit_array.o 00:03:41.699 CC examples/nvme/hello_world/hello_world.o 00:03:41.699 CXX test/cpp_headers/bit_pool.o 00:03:41.699 CC test/env/mem_callbacks/mem_callbacks.o 00:03:41.699 CC test/env/vtophys/vtophys.o 00:03:41.959 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:41.959 LINK accel_perf 00:03:41.959 CC examples/nvme/reconnect/reconnect.o 00:03:41.959 CXX test/cpp_headers/blob_bdev.o 00:03:41.959 CC examples/blob/cli/blobcli.o 00:03:41.959 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:41.959 LINK hello_world 00:03:41.959 LINK vtophys 00:03:41.959 LINK env_dpdk_post_init 00:03:41.959 LINK nvme_fuzz 00:03:42.219 CC examples/nvme/arbitration/arbitration.o 00:03:42.219 CXX test/cpp_headers/blobfs_bdev.o 00:03:42.219 CXX 
test/cpp_headers/blobfs.o 00:03:42.219 CC examples/nvme/hotplug/hotplug.o 00:03:42.219 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:42.219 CXX test/cpp_headers/blob.o 00:03:42.219 LINK reconnect 00:03:42.479 LINK mem_callbacks 00:03:42.479 CC test/app/stub/stub.o 00:03:42.479 CC examples/bdev/hello_world/hello_bdev.o 00:03:42.479 LINK blobcli 00:03:42.479 LINK arbitration 00:03:42.479 CXX test/cpp_headers/conf.o 00:03:42.479 LINK hotplug 00:03:42.479 LINK nvme_manage 00:03:42.479 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:42.479 CC test/env/memory/memory_ut.o 00:03:42.479 LINK stub 00:03:42.479 CXX test/cpp_headers/config.o 00:03:42.740 LINK hello_bdev 00:03:42.740 CXX test/cpp_headers/cpuset.o 00:03:42.740 CC test/env/pci/pci_ut.o 00:03:42.740 CC examples/nvme/abort/abort.o 00:03:42.740 LINK cmb_copy 00:03:42.740 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:42.740 CC test/event/event_perf/event_perf.o 00:03:42.740 CXX test/cpp_headers/crc16.o 00:03:42.740 CC examples/bdev/bdevperf/bdevperf.o 00:03:42.740 LINK pmr_persistence 00:03:42.740 LINK event_perf 00:03:43.000 CXX test/cpp_headers/crc32.o 00:03:43.001 CC test/event/reactor/reactor.o 00:03:43.001 CC test/nvme/aer/aer.o 00:03:43.001 LINK pci_ut 00:03:43.001 LINK abort 00:03:43.001 LINK reactor 00:03:43.001 CC test/rpc_client/rpc_client_test.o 00:03:43.001 CXX test/cpp_headers/crc64.o 00:03:43.262 CC test/accel/dif/dif.o 00:03:43.262 CXX test/cpp_headers/dif.o 00:03:43.262 CC test/event/reactor_perf/reactor_perf.o 00:03:43.262 LINK rpc_client_test 00:03:43.262 LINK aer 00:03:43.262 CC test/blobfs/mkfs/mkfs.o 00:03:43.262 LINK reactor_perf 00:03:43.262 CXX test/cpp_headers/dma.o 00:03:43.262 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:43.522 CC test/nvme/reset/reset.o 00:03:43.522 CC test/lvol/esnap/esnap.o 00:03:43.522 CXX test/cpp_headers/endian.o 00:03:43.522 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:43.522 LINK mkfs 00:03:43.522 CC test/event/app_repeat/app_repeat.o 00:03:43.522 LINK bdevperf 00:03:43.522 CXX test/cpp_headers/env_dpdk.o 00:03:43.522 LINK memory_ut 00:03:43.522 LINK reset 00:03:43.522 LINK app_repeat 00:03:43.784 LINK dif 00:03:43.784 CC test/event/scheduler/scheduler.o 00:03:43.784 CXX test/cpp_headers/env.o 00:03:43.784 CXX test/cpp_headers/event.o 00:03:43.784 CC test/nvme/sgl/sgl.o 00:03:43.784 CC test/nvme/e2edp/nvme_dp.o 00:03:43.784 CC test/nvme/overhead/overhead.o 00:03:43.784 CXX test/cpp_headers/fd_group.o 00:03:43.784 LINK vhost_fuzz 00:03:44.046 LINK scheduler 00:03:44.046 LINK iscsi_fuzz 00:03:44.046 CC examples/nvmf/nvmf/nvmf.o 00:03:44.046 CXX test/cpp_headers/fd.o 00:03:44.046 CXX test/cpp_headers/file.o 00:03:44.046 CC test/bdev/bdevio/bdevio.o 00:03:44.046 CXX test/cpp_headers/fsdev.o 00:03:44.046 LINK nvme_dp 00:03:44.046 LINK sgl 00:03:44.046 CXX test/cpp_headers/fsdev_module.o 00:03:44.046 CXX test/cpp_headers/ftl.o 00:03:44.046 CXX test/cpp_headers/fuse_dispatcher.o 00:03:44.308 LINK overhead 00:03:44.308 CXX test/cpp_headers/gpt_spec.o 00:03:44.308 LINK nvmf 00:03:44.308 CC test/nvme/err_injection/err_injection.o 00:03:44.308 CXX test/cpp_headers/hexlify.o 00:03:44.308 CC test/nvme/startup/startup.o 00:03:44.308 CXX test/cpp_headers/histogram_data.o 00:03:44.308 CXX test/cpp_headers/idxd.o 00:03:44.308 CXX test/cpp_headers/idxd_spec.o 00:03:44.308 CC test/nvme/reserve/reserve.o 00:03:44.308 CXX test/cpp_headers/init.o 00:03:44.308 CXX test/cpp_headers/ioat.o 00:03:44.570 LINK err_injection 00:03:44.570 LINK bdevio 00:03:44.570 LINK startup 00:03:44.570 CXX 
test/cpp_headers/ioat_spec.o 00:03:44.570 CC test/nvme/simple_copy/simple_copy.o 00:03:44.570 LINK reserve 00:03:44.570 CXX test/cpp_headers/iscsi_spec.o 00:03:44.570 CXX test/cpp_headers/json.o 00:03:44.570 CC test/nvme/connect_stress/connect_stress.o 00:03:44.570 CXX test/cpp_headers/jsonrpc.o 00:03:44.570 CC test/nvme/boot_partition/boot_partition.o 00:03:44.570 CC test/nvme/compliance/nvme_compliance.o 00:03:44.570 CC test/nvme/fused_ordering/fused_ordering.o 00:03:44.831 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:44.831 CXX test/cpp_headers/keyring.o 00:03:44.831 LINK connect_stress 00:03:44.831 CC test/nvme/fdp/fdp.o 00:03:44.831 LINK simple_copy 00:03:44.831 CC test/nvme/cuse/cuse.o 00:03:44.831 LINK boot_partition 00:03:44.831 LINK fused_ordering 00:03:44.831 CXX test/cpp_headers/keyring_module.o 00:03:44.831 CXX test/cpp_headers/likely.o 00:03:44.831 CXX test/cpp_headers/log.o 00:03:44.831 LINK doorbell_aers 00:03:44.831 CXX test/cpp_headers/lvol.o 00:03:45.092 LINK nvme_compliance 00:03:45.092 CXX test/cpp_headers/md5.o 00:03:45.092 CXX test/cpp_headers/memory.o 00:03:45.092 CXX test/cpp_headers/mmio.o 00:03:45.092 CXX test/cpp_headers/nbd.o 00:03:45.092 CXX test/cpp_headers/net.o 00:03:45.092 CXX test/cpp_headers/notify.o 00:03:45.092 CXX test/cpp_headers/nvme.o 00:03:45.092 CXX test/cpp_headers/nvme_intel.o 00:03:45.092 CXX test/cpp_headers/nvme_ocssd.o 00:03:45.092 LINK fdp 00:03:45.092 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:45.092 CXX test/cpp_headers/nvme_spec.o 00:03:45.092 CXX test/cpp_headers/nvme_zns.o 00:03:45.351 CXX test/cpp_headers/nvmf_cmd.o 00:03:45.351 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:45.351 CXX test/cpp_headers/nvmf.o 00:03:45.351 CXX test/cpp_headers/nvmf_spec.o 00:03:45.351 CXX test/cpp_headers/nvmf_transport.o 00:03:45.351 CXX test/cpp_headers/opal.o 00:03:45.351 CXX test/cpp_headers/opal_spec.o 00:03:45.351 CXX test/cpp_headers/pci_ids.o 00:03:45.351 CXX test/cpp_headers/pipe.o 00:03:45.351 CXX test/cpp_headers/queue.o 00:03:45.351 CXX test/cpp_headers/reduce.o 00:03:45.351 CXX test/cpp_headers/rpc.o 00:03:45.351 CXX test/cpp_headers/scheduler.o 00:03:45.351 CXX test/cpp_headers/scsi.o 00:03:45.610 CXX test/cpp_headers/scsi_spec.o 00:03:45.610 CXX test/cpp_headers/sock.o 00:03:45.610 CXX test/cpp_headers/stdinc.o 00:03:45.610 CXX test/cpp_headers/string.o 00:03:45.610 CXX test/cpp_headers/thread.o 00:03:45.610 CXX test/cpp_headers/trace.o 00:03:45.610 CXX test/cpp_headers/trace_parser.o 00:03:45.610 CXX test/cpp_headers/tree.o 00:03:45.610 CXX test/cpp_headers/ublk.o 00:03:45.610 CXX test/cpp_headers/util.o 00:03:45.610 CXX test/cpp_headers/uuid.o 00:03:45.610 CXX test/cpp_headers/version.o 00:03:45.610 CXX test/cpp_headers/vfio_user_pci.o 00:03:45.610 CXX test/cpp_headers/vfio_user_spec.o 00:03:45.610 CXX test/cpp_headers/vhost.o 00:03:45.610 CXX test/cpp_headers/vmd.o 00:03:45.610 CXX test/cpp_headers/xor.o 00:03:45.610 CXX test/cpp_headers/zipf.o 00:03:46.181 LINK cuse 00:03:48.095 LINK esnap 00:03:48.357 00:03:48.357 real 1m7.856s 00:03:48.357 user 6m16.702s 00:03:48.357 sys 1m11.460s 00:03:48.357 03:30:40 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:48.357 ************************************ 00:03:48.357 END TEST make 00:03:48.357 ************************************ 00:03:48.357 03:30:40 make -- common/autotest_common.sh@10 -- $ set +x 00:03:48.357 03:30:40 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:48.357 03:30:40 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:48.357 03:30:40 -- 
pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:48.357 03:30:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.357 03:30:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:48.357 03:30:40 -- pm/common@44 -- $ pid=5065 00:03:48.357 03:30:40 -- pm/common@50 -- $ kill -TERM 5065 00:03:48.357 03:30:40 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.357 03:30:40 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:48.357 03:30:40 -- pm/common@44 -- $ pid=5067 00:03:48.357 03:30:40 -- pm/common@50 -- $ kill -TERM 5067 00:03:48.357 03:30:40 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:03:48.357 03:30:40 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:03:48.357 03:30:40 -- common/autotest_common.sh@1681 -- # lcov --version 00:03:48.357 03:30:40 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:03:48.357 03:30:40 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:48.357 03:30:40 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:48.357 03:30:40 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:48.357 03:30:40 -- scripts/common.sh@336 -- # IFS=.-: 00:03:48.357 03:30:40 -- scripts/common.sh@336 -- # read -ra ver1 00:03:48.358 03:30:40 -- scripts/common.sh@337 -- # IFS=.-: 00:03:48.358 03:30:40 -- scripts/common.sh@337 -- # read -ra ver2 00:03:48.358 03:30:40 -- scripts/common.sh@338 -- # local 'op=<' 00:03:48.358 03:30:40 -- scripts/common.sh@340 -- # ver1_l=2 00:03:48.358 03:30:40 -- scripts/common.sh@341 -- # ver2_l=1 00:03:48.358 03:30:40 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:48.358 03:30:40 -- scripts/common.sh@344 -- # case "$op" in 00:03:48.358 03:30:40 -- scripts/common.sh@345 -- # : 1 00:03:48.358 03:30:40 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:48.358 03:30:40 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:48.358 03:30:40 -- scripts/common.sh@365 -- # decimal 1 00:03:48.358 03:30:40 -- scripts/common.sh@353 -- # local d=1 00:03:48.358 03:30:40 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:48.358 03:30:40 -- scripts/common.sh@355 -- # echo 1 00:03:48.358 03:30:40 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:48.358 03:30:40 -- scripts/common.sh@366 -- # decimal 2 00:03:48.358 03:30:40 -- scripts/common.sh@353 -- # local d=2 00:03:48.358 03:30:40 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:48.358 03:30:40 -- scripts/common.sh@355 -- # echo 2 00:03:48.358 03:30:40 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:48.358 03:30:40 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:48.358 03:30:40 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:48.358 03:30:40 -- scripts/common.sh@368 -- # return 0 00:03:48.358 03:30:40 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:48.358 03:30:40 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:03:48.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.358 --rc genhtml_branch_coverage=1 00:03:48.358 --rc genhtml_function_coverage=1 00:03:48.358 --rc genhtml_legend=1 00:03:48.358 --rc geninfo_all_blocks=1 00:03:48.358 --rc geninfo_unexecuted_blocks=1 00:03:48.358 00:03:48.358 ' 00:03:48.358 03:30:40 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:03:48.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.358 --rc genhtml_branch_coverage=1 00:03:48.358 --rc genhtml_function_coverage=1 00:03:48.358 --rc genhtml_legend=1 00:03:48.358 --rc geninfo_all_blocks=1 00:03:48.358 --rc geninfo_unexecuted_blocks=1 00:03:48.358 00:03:48.358 ' 00:03:48.358 03:30:40 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:03:48.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.358 --rc genhtml_branch_coverage=1 00:03:48.358 --rc genhtml_function_coverage=1 00:03:48.358 --rc genhtml_legend=1 00:03:48.358 --rc geninfo_all_blocks=1 00:03:48.358 --rc geninfo_unexecuted_blocks=1 00:03:48.358 00:03:48.358 ' 00:03:48.358 03:30:40 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:03:48.358 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:48.358 --rc genhtml_branch_coverage=1 00:03:48.358 --rc genhtml_function_coverage=1 00:03:48.358 --rc genhtml_legend=1 00:03:48.358 --rc geninfo_all_blocks=1 00:03:48.358 --rc geninfo_unexecuted_blocks=1 00:03:48.358 00:03:48.358 ' 00:03:48.358 03:30:40 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:48.358 03:30:40 -- nvmf/common.sh@7 -- # uname -s 00:03:48.620 03:30:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:48.620 03:30:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:48.620 03:30:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:48.620 03:30:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:48.620 03:30:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:48.620 03:30:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:48.620 03:30:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:48.620 03:30:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:48.620 03:30:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:48.620 03:30:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:48.620 03:30:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ab7b33bf-904a-48ad-bfbc-0fc5fd07eef8 00:03:48.620 
03:30:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=ab7b33bf-904a-48ad-bfbc-0fc5fd07eef8 00:03:48.620 03:30:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:48.620 03:30:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:48.620 03:30:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:48.620 03:30:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:48.620 03:30:40 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:48.620 03:30:40 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:48.620 03:30:40 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:48.620 03:30:40 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:48.620 03:30:40 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:48.620 03:30:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.620 03:30:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.620 03:30:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.620 03:30:40 -- paths/export.sh@5 -- # export PATH 00:03:48.620 03:30:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:48.620 03:30:40 -- nvmf/common.sh@51 -- # : 0 00:03:48.620 03:30:40 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:48.620 03:30:40 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:48.620 03:30:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:48.620 03:30:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:48.620 03:30:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:48.620 03:30:40 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:48.620 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:48.620 03:30:40 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:48.620 03:30:40 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:48.620 03:30:40 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:48.620 03:30:40 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:48.620 03:30:40 -- spdk/autotest.sh@32 -- # uname -s 00:03:48.620 03:30:40 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:48.620 03:30:40 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:48.620 03:30:40 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:48.620 03:30:40 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:48.620 03:30:40 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:48.620 03:30:40 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:48.620 03:30:40 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:48.620 03:30:40 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:48.620 03:30:40 -- spdk/autotest.sh@48 -- # udevadm_pid=54614 00:03:48.620 03:30:40 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:48.620 03:30:40 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:48.620 03:30:40 -- pm/common@17 -- # local monitor 00:03:48.620 03:30:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.620 03:30:40 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:48.620 03:30:40 -- pm/common@25 -- # sleep 1 00:03:48.620 03:30:40 -- pm/common@21 -- # date +%s 00:03:48.620 03:30:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727753440 00:03:48.620 03:30:40 -- pm/common@21 -- # date +%s 00:03:48.620 03:30:40 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1727753440 00:03:48.620 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727753440_collect-vmstat.pm.log 00:03:48.620 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1727753440_collect-cpu-load.pm.log 00:03:49.566 03:30:41 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:49.566 03:30:41 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:49.566 03:30:41 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:49.566 03:30:41 -- common/autotest_common.sh@10 -- # set +x 00:03:49.566 03:30:41 -- spdk/autotest.sh@59 -- # create_test_list 00:03:49.566 03:30:41 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:49.566 03:30:41 -- common/autotest_common.sh@10 -- # set +x 00:03:49.566 03:30:42 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:49.566 03:30:42 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:49.566 03:30:42 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:49.566 03:30:42 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:49.566 03:30:42 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:49.566 03:30:42 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:49.566 03:30:42 -- common/autotest_common.sh@1455 -- # uname 00:03:49.566 03:30:42 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:49.566 03:30:42 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:49.566 03:30:42 -- common/autotest_common.sh@1475 -- # uname 00:03:49.566 03:30:42 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:49.566 03:30:42 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:49.566 03:30:42 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:49.566 lcov: LCOV version 1.15 00:03:49.566 03:30:42 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:04.520 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:04.520 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:19.425 03:31:11 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:19.425 03:31:11 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:19.425 03:31:11 -- common/autotest_common.sh@10 -- # set +x 00:04:19.425 03:31:11 -- spdk/autotest.sh@78 -- # rm -f 00:04:19.425 03:31:11 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:19.425 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:19.687 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:19.687 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:19.687 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:19.687 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:19.687 03:31:12 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:19.687 03:31:12 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:19.687 03:31:12 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:19.687 03:31:12 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:19.687 03:31:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.687 03:31:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.687 03:31:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.687 03:31:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:19.687 03:31:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:19.687 03:31:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.687 03:31:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:19.687 03:31:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:19.687 03:31:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.687 03:31:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:19.687 03:31:12 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.687 03:31:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:19.687 03:31:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:19.687 03:31:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:19.687 03:31:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:19.687 03:31:12 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:19.687 03:31:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.687 03:31:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.687 03:31:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:19.687 03:31:12 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:19.687 03:31:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:19.952 No valid GPT data, bailing 00:04:19.952 03:31:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:19.952 03:31:12 -- scripts/common.sh@394 -- # pt= 00:04:19.952 03:31:12 -- scripts/common.sh@395 -- # return 1 00:04:19.952 03:31:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:19.952 1+0 records in 00:04:19.952 1+0 records out 00:04:19.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566913 s, 185 MB/s 00:04:19.952 03:31:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.952 03:31:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.952 03:31:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:19.952 03:31:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:19.952 03:31:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:19.952 No valid GPT data, bailing 00:04:19.952 03:31:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:19.952 03:31:12 -- scripts/common.sh@394 -- # pt= 00:04:19.952 03:31:12 -- scripts/common.sh@395 -- # return 1 00:04:19.952 03:31:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:19.952 1+0 records in 00:04:19.952 1+0 records out 00:04:19.952 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00548868 s, 191 MB/s 00:04:19.952 03:31:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.952 03:31:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.952 03:31:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:19.952 03:31:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:19.952 03:31:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:19.952 No valid GPT data, bailing 00:04:19.952 03:31:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:19.953 03:31:12 -- scripts/common.sh@394 -- # pt= 00:04:19.953 03:31:12 -- scripts/common.sh@395 -- # return 1 00:04:19.953 03:31:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:19.953 1+0 
records in 00:04:19.953 1+0 records out 00:04:19.953 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00512703 s, 205 MB/s 00:04:19.953 03:31:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:19.953 03:31:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:19.953 03:31:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:19.953 03:31:12 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:19.953 03:31:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:19.953 No valid GPT data, bailing 00:04:19.953 03:31:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:20.214 03:31:12 -- scripts/common.sh@394 -- # pt= 00:04:20.214 03:31:12 -- scripts/common.sh@395 -- # return 1 00:04:20.214 03:31:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:20.214 1+0 records in 00:04:20.214 1+0 records out 00:04:20.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540951 s, 194 MB/s 00:04:20.214 03:31:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.214 03:31:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.214 03:31:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:20.214 03:31:12 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:20.214 03:31:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:20.214 No valid GPT data, bailing 00:04:20.214 03:31:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:20.214 03:31:12 -- scripts/common.sh@394 -- # pt= 00:04:20.214 03:31:12 -- scripts/common.sh@395 -- # return 1 00:04:20.214 03:31:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:20.214 1+0 records in 00:04:20.214 1+0 records out 00:04:20.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262474 s, 39.9 MB/s 00:04:20.214 03:31:12 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:20.214 03:31:12 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:20.214 03:31:12 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:20.214 03:31:12 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:20.214 03:31:12 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:20.214 No valid GPT data, bailing 00:04:20.214 03:31:12 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:20.214 03:31:12 -- scripts/common.sh@394 -- # pt= 00:04:20.214 03:31:12 -- scripts/common.sh@395 -- # return 1 00:04:20.214 03:31:12 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:20.214 1+0 records in 00:04:20.214 1+0 records out 00:04:20.214 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409594 s, 256 MB/s 00:04:20.214 03:31:12 -- spdk/autotest.sh@105 -- # sync 00:04:20.785 03:31:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:20.785 03:31:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:20.785 03:31:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:22.702 03:31:14 -- spdk/autotest.sh@111 -- # uname -s 00:04:22.702 03:31:14 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:22.702 03:31:14 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:22.702 03:31:14 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:22.965 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:23.225 
Hugepages 00:04:23.225 node hugesize free / total 00:04:23.225 node0 1048576kB 0 / 0 00:04:23.225 node0 2048kB 0 / 0 00:04:23.225 00:04:23.225 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:23.225 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:23.486 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:04:23.486 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:23.487 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:23.487 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:23.487 03:31:16 -- spdk/autotest.sh@117 -- # uname -s 00:04:23.487 03:31:16 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:23.487 03:31:16 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:23.487 03:31:16 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:24.060 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:24.633 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.633 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.633 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.633 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:24.633 03:31:17 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:26.020 03:31:18 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:26.020 03:31:18 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:26.020 03:31:18 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:26.020 03:31:18 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:26.020 03:31:18 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:26.020 03:31:18 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:26.020 03:31:18 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:26.020 03:31:18 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:26.020 03:31:18 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:26.020 03:31:18 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:26.020 03:31:18 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:26.020 03:31:18 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:26.020 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:26.282 Waiting for block devices as requested 00:04:26.282 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.282 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.544 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:26.544 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:31.833 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:31.833 03:31:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:31.833 03:31:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:31.833 03:31:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.833 03:31:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:31.833 03:31:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:31.833 03:31:24 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:31.833 03:31:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:31.834 03:31:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:31.834 03:31:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:31.834 03:31:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:31.834 03:31:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:31.834 03:31:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:31.834 03:31:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1541 -- # continue 00:04:31.834 03:31:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:31.834 03:31:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:31.834 03:31:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.834 03:31:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:31.834 03:31:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:31.834 03:31:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:31.834 03:31:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:31.834 03:31:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:31.834 03:31:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:31.834 03:31:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:31.834 03:31:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:31.834 03:31:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1541 -- # continue 00:04:31.834 03:31:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:31.834 03:31:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:31.834 03:31:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.834 03:31:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:31.834 03:31:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:31.834 03:31:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:31.834 03:31:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:31.834 03:31:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1541 -- # continue 00:04:31.834 03:31:24 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:31.834 03:31:24 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:31.834 03:31:24 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:31.834 03:31:24 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:31.834 03:31:24 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:31.834 03:31:24 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:31.834 03:31:24 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:31.834 03:31:24 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:31.834 03:31:24 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:31.834 03:31:24 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:31.834 03:31:24 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:31.834 03:31:24 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:31.834 03:31:24 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
00:04:31.834 03:31:24 -- common/autotest_common.sh@1541 -- # continue 00:04:31.834 03:31:24 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:31.834 03:31:24 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:31.834 03:31:24 -- common/autotest_common.sh@10 -- # set +x 00:04:31.834 03:31:24 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:31.834 03:31:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:31.834 03:31:24 -- common/autotest_common.sh@10 -- # set +x 00:04:31.834 03:31:24 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:32.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:32.669 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.669 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.669 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.947 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:32.947 03:31:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:32.947 03:31:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:32.947 03:31:25 -- common/autotest_common.sh@10 -- # set +x 00:04:32.947 03:31:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:32.947 03:31:25 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:32.947 03:31:25 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:32.947 03:31:25 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:32.947 03:31:25 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:32.947 03:31:25 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:32.947 03:31:25 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:32.947 03:31:25 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:32.947 03:31:25 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:32.947 03:31:25 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:32.947 03:31:25 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:32.947 03:31:25 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:32.947 03:31:25 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:32.947 03:31:25 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:32.947 03:31:25 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:32.947 03:31:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:32.947 03:31:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:32.947 03:31:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:32.947 03:31:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.947 03:31:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:32.948 03:31:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:32.948 03:31:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:32.948 03:31:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.948 03:31:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:32.948 03:31:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:32.948 03:31:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:32.948 03:31:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:32.948 03:31:25 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:32.948 03:31:25 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:32.948 03:31:25 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:32.948 03:31:25 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:32.948 03:31:25 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:32.948 03:31:25 -- common/autotest_common.sh@1570 -- # return 0 00:04:32.948 03:31:25 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:32.948 03:31:25 -- common/autotest_common.sh@1578 -- # return 0 00:04:32.948 03:31:25 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:32.948 03:31:25 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:32.948 03:31:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.948 03:31:25 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:32.948 03:31:25 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:32.948 03:31:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:32.948 03:31:25 -- common/autotest_common.sh@10 -- # set +x 00:04:32.948 03:31:25 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:32.948 03:31:25 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:32.948 03:31:25 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:32.948 03:31:25 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:32.948 03:31:25 -- common/autotest_common.sh@10 -- # set +x 00:04:33.207 ************************************ 00:04:33.207 START TEST env 00:04:33.207 ************************************ 00:04:33.207 03:31:25 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:33.207 * Looking for test storage... 00:04:33.207 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:33.207 03:31:25 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:33.207 03:31:25 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:33.207 03:31:25 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:33.207 03:31:25 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:33.207 03:31:25 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:33.207 03:31:25 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:33.207 03:31:25 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:33.207 03:31:25 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:33.207 03:31:25 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:33.207 03:31:25 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:33.207 03:31:25 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:33.207 03:31:25 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:33.207 03:31:25 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:33.207 03:31:25 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:33.207 03:31:25 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:33.207 03:31:25 env -- scripts/common.sh@344 -- # case "$op" in 00:04:33.207 03:31:25 env -- scripts/common.sh@345 -- # : 1 00:04:33.207 03:31:25 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:33.207 03:31:25 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:33.207 03:31:25 env -- scripts/common.sh@365 -- # decimal 1 00:04:33.207 03:31:25 env -- scripts/common.sh@353 -- # local d=1 00:04:33.207 03:31:25 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:33.207 03:31:25 env -- scripts/common.sh@355 -- # echo 1 00:04:33.208 03:31:25 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:33.208 03:31:25 env -- scripts/common.sh@366 -- # decimal 2 00:04:33.208 03:31:25 env -- scripts/common.sh@353 -- # local d=2 00:04:33.208 03:31:25 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:33.208 03:31:25 env -- scripts/common.sh@355 -- # echo 2 00:04:33.208 03:31:25 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:33.208 03:31:25 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:33.208 03:31:25 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:33.208 03:31:25 env -- scripts/common.sh@368 -- # return 0 00:04:33.208 03:31:25 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:33.208 03:31:25 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.208 --rc genhtml_branch_coverage=1 00:04:33.208 --rc genhtml_function_coverage=1 00:04:33.208 --rc genhtml_legend=1 00:04:33.208 --rc geninfo_all_blocks=1 00:04:33.208 --rc geninfo_unexecuted_blocks=1 00:04:33.208 00:04:33.208 ' 00:04:33.208 03:31:25 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.208 --rc genhtml_branch_coverage=1 00:04:33.208 --rc genhtml_function_coverage=1 00:04:33.208 --rc genhtml_legend=1 00:04:33.208 --rc geninfo_all_blocks=1 00:04:33.208 --rc geninfo_unexecuted_blocks=1 00:04:33.208 00:04:33.208 ' 00:04:33.208 03:31:25 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.208 --rc genhtml_branch_coverage=1 00:04:33.208 --rc genhtml_function_coverage=1 00:04:33.208 --rc genhtml_legend=1 00:04:33.208 --rc geninfo_all_blocks=1 00:04:33.208 --rc geninfo_unexecuted_blocks=1 00:04:33.208 00:04:33.208 ' 00:04:33.208 03:31:25 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:33.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:33.208 --rc genhtml_branch_coverage=1 00:04:33.208 --rc genhtml_function_coverage=1 00:04:33.208 --rc genhtml_legend=1 00:04:33.208 --rc geninfo_all_blocks=1 00:04:33.208 --rc geninfo_unexecuted_blocks=1 00:04:33.208 00:04:33.208 ' 00:04:33.208 03:31:25 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:33.208 03:31:25 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.208 03:31:25 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.208 03:31:25 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.208 ************************************ 00:04:33.208 START TEST env_memory 00:04:33.208 ************************************ 00:04:33.208 03:31:25 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:33.208 00:04:33.208 00:04:33.208 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.208 http://cunit.sourceforge.net/ 00:04:33.208 00:04:33.208 00:04:33.208 Suite: memory 00:04:33.208 Test: alloc and free memory map ...[2024-10-01 03:31:25.736689] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:33.467 passed 00:04:33.467 Test: mem map translation ...[2024-10-01 03:31:25.775793] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:33.467 [2024-10-01 03:31:25.775920] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:33.467 [2024-10-01 03:31:25.776046] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:33.468 [2024-10-01 03:31:25.776086] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:33.468 passed 00:04:33.468 Test: mem map registration ...[2024-10-01 03:31:25.846281] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:33.468 [2024-10-01 03:31:25.846382] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:33.468 passed 00:04:33.468 Test: mem map adjacent registrations ...passed 00:04:33.468 00:04:33.468 Run Summary: Type Total Ran Passed Failed Inactive 00:04:33.468 suites 1 1 n/a 0 0 00:04:33.468 tests 4 4 4 0 0 00:04:33.468 asserts 152 152 152 0 n/a 00:04:33.468 00:04:33.468 Elapsed time = 0.237 seconds 00:04:33.468 00:04:33.468 real 0m0.271s 00:04:33.468 user 0m0.246s 00:04:33.468 sys 0m0.017s 00:04:33.468 03:31:25 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.468 03:31:25 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:33.468 ************************************ 00:04:33.468 END TEST env_memory 00:04:33.468 ************************************ 00:04:33.468 03:31:25 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:33.468 03:31:26 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.468 03:31:26 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.468 03:31:26 env -- common/autotest_common.sh@10 -- # set +x 00:04:33.468 ************************************ 00:04:33.468 START TEST env_vtophys 00:04:33.468 ************************************ 00:04:33.727 03:31:26 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:33.727 EAL: lib.eal log level changed from notice to debug 00:04:33.727 EAL: Detected lcore 0 as core 0 on socket 0 00:04:33.727 EAL: Detected lcore 1 as core 0 on socket 0 00:04:33.727 EAL: Detected lcore 2 as core 0 on socket 0 00:04:33.727 EAL: Detected lcore 3 as core 0 on socket 0 00:04:33.727 EAL: Detected lcore 4 as core 0 on socket 0 00:04:33.727 EAL: Detected lcore 5 as core 0 on socket 0 00:04:33.727 EAL: Detected lcore 6 as core 0 on socket 0 00:04:33.727 EAL: Detected lcore 7 as core 0 on socket 0 00:04:33.727 EAL: Detected lcore 8 as core 0 on socket 0 00:04:33.727 EAL: Detected lcore 9 as core 0 on socket 0 00:04:33.727 EAL: Maximum logical cores by configuration: 128 00:04:33.727 EAL: Detected CPU lcores: 10 00:04:33.727 EAL: Detected NUMA nodes: 1 00:04:33.727 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:33.727 EAL: Detected shared linkage of DPDK 00:04:33.727 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:33.727 EAL: Selected IOVA mode 'PA' 00:04:33.727 EAL: Probing VFIO support... 00:04:33.727 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.727 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:33.727 EAL: Ask a virtual area of 0x2e000 bytes 00:04:33.727 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:33.727 EAL: Setting up physically contiguous memory... 00:04:33.727 EAL: Setting maximum number of open files to 524288 00:04:33.727 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:33.727 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:33.727 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.727 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:33.727 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.727 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.727 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:33.727 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:33.727 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.727 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:33.727 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.727 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.727 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:33.727 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:33.727 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.727 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:33.727 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.728 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:33.728 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:33.728 EAL: Ask a virtual area of 0x61000 bytes 00:04:33.728 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:33.728 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:33.728 EAL: Ask a virtual area of 0x400000000 bytes 00:04:33.728 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:33.728 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:33.728 EAL: Hugepages will be freed exactly as allocated. 00:04:33.728 EAL: No shared files mode enabled, IPC is disabled 00:04:33.728 EAL: No shared files mode enabled, IPC is disabled 00:04:33.728 EAL: TSC frequency is ~2600000 KHz 00:04:33.728 EAL: Main lcore 0 is ready (tid=7fd3350daa40;cpuset=[0]) 00:04:33.728 EAL: Trying to obtain current memory policy. 00:04:33.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:33.728 EAL: Restoring previous memory policy: 0 00:04:33.728 EAL: request: mp_malloc_sync 00:04:33.728 EAL: No shared files mode enabled, IPC is disabled 00:04:33.728 EAL: Heap on socket 0 was expanded by 2MB 00:04:33.728 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:33.728 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:33.728 EAL: Mem event callback 'spdk:(nil)' registered 00:04:33.728 EAL: Module /sys/module/vfio_pci not found! 
error 2 (No such file or directory) 00:04:33.728 00:04:33.728 00:04:33.728 CUnit - A unit testing framework for C - Version 2.1-3 00:04:33.728 http://cunit.sourceforge.net/ 00:04:33.728 00:04:33.728 00:04:33.728 Suite: components_suite 00:04:34.297 Test: vtophys_malloc_test ...passed 00:04:34.297 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:34.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.297 EAL: Restoring previous memory policy: 4 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was expanded by 4MB 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was shrunk by 4MB 00:04:34.297 EAL: Trying to obtain current memory policy. 00:04:34.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.297 EAL: Restoring previous memory policy: 4 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was expanded by 6MB 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was shrunk by 6MB 00:04:34.297 EAL: Trying to obtain current memory policy. 00:04:34.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.297 EAL: Restoring previous memory policy: 4 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was expanded by 10MB 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was shrunk by 10MB 00:04:34.297 EAL: Trying to obtain current memory policy. 00:04:34.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.297 EAL: Restoring previous memory policy: 4 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was expanded by 18MB 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was shrunk by 18MB 00:04:34.297 EAL: Trying to obtain current memory policy. 00:04:34.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.297 EAL: Restoring previous memory policy: 4 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was expanded by 34MB 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was shrunk by 34MB 00:04:34.297 EAL: Trying to obtain current memory policy. 
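
Each expand/shrink pair above is vtophys_spdk_malloc_test allocating a progressively larger buffer from the DPDK heap and freeing it again; every heap growth fires the registered 'spdk:(nil)' mem event callback so SPDK can map the new hugepage-backed region, and every free fires the matching shrink (the 66MB through 1GB rounds continue below). A hedged sketch of that hook using DPDK's public rte_memory API, not SPDK's internal code — it assumes an EAL environment with hugepages configured, and the callback name "demo" is arbitrary:

```c
/* Sketch of the DPDK mem-event hook the log shows firing as
 * "Calling mem event callback 'spdk:(nil)'". Not SPDK's code. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memory.h>

static void on_mem_event(enum rte_mem_event type, const void *addr,
                         size_t len, void *arg)
{
    (void)arg;
    printf("heap %s: addr=%p len=%zu\n",
           type == RTE_MEM_EVENT_ALLOC ? "expanded" : "shrunk", addr, len);
}

int main(int argc, char **argv)
{
    if (rte_eal_init(argc, argv) < 0) {
        return 1;
    }
    rte_mem_event_callback_register("demo", on_mem_event, NULL);

    /* Progressively larger buffers; the exact 4/6/10/18/34 MB ladder
     * in the log comes from the test's own sweep. */
    for (size_t sz = 4u << 20; sz <= 64u << 20; sz *= 2) {
        void *p = rte_malloc(NULL, sz, 0);   /* may trigger ALLOC events */
        rte_free(p);                         /* may trigger FREE events  */
    }
    return rte_eal_cleanup();
}
```
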
00:04:34.297 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.297 EAL: Restoring previous memory policy: 4 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was expanded by 66MB 00:04:34.297 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.297 EAL: request: mp_malloc_sync 00:04:34.297 EAL: No shared files mode enabled, IPC is disabled 00:04:34.297 EAL: Heap on socket 0 was shrunk by 66MB 00:04:34.557 EAL: Trying to obtain current memory policy. 00:04:34.557 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.557 EAL: Restoring previous memory policy: 4 00:04:34.557 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.557 EAL: request: mp_malloc_sync 00:04:34.557 EAL: No shared files mode enabled, IPC is disabled 00:04:34.557 EAL: Heap on socket 0 was expanded by 130MB 00:04:34.557 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.557 EAL: request: mp_malloc_sync 00:04:34.557 EAL: No shared files mode enabled, IPC is disabled 00:04:34.557 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.816 EAL: Trying to obtain current memory policy. 00:04:34.816 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.816 EAL: Restoring previous memory policy: 4 00:04:34.816 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.816 EAL: request: mp_malloc_sync 00:04:34.816 EAL: No shared files mode enabled, IPC is disabled 00:04:34.816 EAL: Heap on socket 0 was expanded by 258MB 00:04:35.078 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.339 EAL: request: mp_malloc_sync 00:04:35.339 EAL: No shared files mode enabled, IPC is disabled 00:04:35.339 EAL: Heap on socket 0 was shrunk by 258MB 00:04:35.601 EAL: Trying to obtain current memory policy. 00:04:35.601 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.601 EAL: Restoring previous memory policy: 4 00:04:35.601 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.601 EAL: request: mp_malloc_sync 00:04:35.601 EAL: No shared files mode enabled, IPC is disabled 00:04:35.601 EAL: Heap on socket 0 was expanded by 514MB 00:04:36.173 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.173 EAL: request: mp_malloc_sync 00:04:36.173 EAL: No shared files mode enabled, IPC is disabled 00:04:36.173 EAL: Heap on socket 0 was shrunk by 514MB 00:04:36.743 EAL: Trying to obtain current memory policy. 
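
The recurring obtain/set/restore triplets wrapping each allocation are DPDK pinning heap growth to the preferred NUMA socket: it saves the caller's memory policy, sets MPOL_PREFERRED for socket 0 while the hugepages are touched, then restores the saved mode (the final 1GB round continues below). The same dance in minimal form with the raw libnuma syscall wrappers — link with -lnuma, and this sketch assumes the saved mode (e.g. MPOL_DEFAULT) takes no nodemask:

```c
/* Save / prefer-socket-0 / restore, as in the EAL lines above. */
#include <numaif.h>
#include <stdio.h>

int main(void)
{
    unsigned long mask = 1UL << 0;   /* prefer NUMA node 0 */
    int old_mode = 0;

    get_mempolicy(&old_mode, NULL, 0, NULL, 0);      /* save current mode */
    set_mempolicy(MPOL_PREFERRED, &mask, sizeof(mask) * 8);
    /* ... allocate and touch memory on the preferred node here ... */
    set_mempolicy(old_mode, NULL, 0);                /* restore */
    printf("previous policy mode: %d\n", old_mode);
    return 0;
}
```
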
00:04:36.743 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:36.743 EAL: Restoring previous memory policy: 4 00:04:36.743 EAL: Calling mem event callback 'spdk:(nil)' 00:04:36.743 EAL: request: mp_malloc_sync 00:04:36.743 EAL: No shared files mode enabled, IPC is disabled 00:04:36.743 EAL: Heap on socket 0 was expanded by 1026MB 00:04:38.125 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.125 EAL: request: mp_malloc_sync 00:04:38.125 EAL: No shared files mode enabled, IPC is disabled 00:04:38.125 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:38.692 passed 00:04:38.692 00:04:38.692 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.692 suites 1 1 n/a 0 0 00:04:38.693 tests 2 2 2 0 0 00:04:38.693 asserts 5698 5698 5698 0 n/a 00:04:38.693 00:04:38.693 Elapsed time = 4.990 seconds 00:04:38.693 EAL: Calling mem event callback 'spdk:(nil)' 00:04:38.953 EAL: request: mp_malloc_sync 00:04:38.953 EAL: No shared files mode enabled, IPC is disabled 00:04:38.953 EAL: Heap on socket 0 was shrunk by 2MB 00:04:38.953 EAL: No shared files mode enabled, IPC is disabled 00:04:38.953 EAL: No shared files mode enabled, IPC is disabled 00:04:38.953 EAL: No shared files mode enabled, IPC is disabled 00:04:38.953 ************************************ 00:04:38.953 END TEST env_vtophys 00:04:38.953 ************************************ 00:04:38.953 00:04:38.953 real 0m5.252s 00:04:38.953 user 0m4.361s 00:04:38.953 sys 0m0.736s 00:04:38.953 03:31:31 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.953 03:31:31 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:38.953 03:31:31 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.953 03:31:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:38.953 03:31:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.953 03:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.953 ************************************ 00:04:38.953 START TEST env_pci 00:04:38.953 ************************************ 00:04:38.953 03:31:31 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:38.953 00:04:38.953 00:04:38.953 CUnit - A unit testing framework for C - Version 2.1-3 00:04:38.953 http://cunit.sourceforge.net/ 00:04:38.953 00:04:38.953 00:04:38.953 Suite: pci 00:04:38.953 Test: pci_hook ...[2024-10-01 03:31:31.353824] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57394 has claimed it 00:04:38.953 passed 00:04:38.953 00:04:38.953 Run Summary: Type Total Ran Passed Failed Inactive 00:04:38.953 suites 1 1 n/a 0 0 00:04:38.953 tests 1 1 1 0 0 00:04:38.953 asserts 25 25 25 0 n/a 00:04:38.953 00:04:38.953 Elapsed time = 0.007 seconds 00:04:38.953 EAL: Cannot find device (10000:00:01.0) 00:04:38.953 EAL: Failed to attach device on primary process 00:04:38.953 ************************************ 00:04:38.953 END TEST env_pci 00:04:38.953 ************************************ 00:04:38.953 00:04:38.953 real 0m0.062s 00:04:38.953 user 0m0.026s 00:04:38.953 sys 0m0.035s 00:04:38.953 03:31:31 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:38.953 03:31:31 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:38.953 03:31:31 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:38.953 03:31:31 env -- env/env.sh@15 -- # uname 00:04:38.953 03:31:31 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:38.953 03:31:31 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:38.953 03:31:31 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.953 03:31:31 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:38.953 03:31:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:38.953 03:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:38.953 ************************************ 00:04:38.953 START TEST env_dpdk_post_init 00:04:38.953 ************************************ 00:04:38.953 03:31:31 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:38.953 EAL: Detected CPU lcores: 10 00:04:38.953 EAL: Detected NUMA nodes: 1 00:04:38.953 EAL: Detected shared linkage of DPDK 00:04:39.215 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.215 EAL: Selected IOVA mode 'PA' 00:04:39.215 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.215 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:39.215 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:39.215 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:39.215 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:39.215 Starting DPDK initialization... 00:04:39.215 Starting SPDK post initialization... 00:04:39.215 SPDK NVMe probe 00:04:39.215 Attaching to 0000:00:10.0 00:04:39.215 Attaching to 0000:00:11.0 00:04:39.215 Attaching to 0000:00:12.0 00:04:39.215 Attaching to 0000:00:13.0 00:04:39.215 Attached to 0000:00:11.0 00:04:39.215 Attached to 0000:00:13.0 00:04:39.215 Attached to 0000:00:10.0 00:04:39.215 Attached to 0000:00:12.0 00:04:39.215 Cleaning up... 
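
env_dpdk_post_init above re-runs EAL initialization and then lets SPDK's NVMe driver probe the four emulated controllers; note the "Attached to" ordering (11.0, 13.0, 10.0, 12.0) differs from the "Attaching to" ordering because attach completion is asynchronous. A cut-down sketch of that probe/attach flow against SPDK's public API follows; the process name is arbitrary, and detach/cleanup and error handling are deliberately trimmed (the test's timing summary continues below):

```c
/* Hedged sketch of the "SPDK NVMe probe ... Attaching/Attached" flow
 * above, using SPDK's public nvme API; cleanup omitted for brevity. */
#include <stdio.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                     struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;                 /* true = go ahead and attach */
}

static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                      struct spdk_nvme_ctrlr *ctrlr,
                      const struct spdk_nvme_ctrlr_opts *opts)
{
    /* Completion is asynchronous, hence the reordered "Attached to" lines */
    printf("Attached to %s\n", trid->traddr);
}

int main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    opts.name = "probe_sketch";  /* arbitrary process name */
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* NULL trid: enumerate all NVMe controllers on the local PCIe bus */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
}
```
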
00:04:39.215 00:04:39.215 real 0m0.240s 00:04:39.215 user 0m0.072s 00:04:39.215 sys 0m0.070s 00:04:39.215 03:31:31 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.215 03:31:31 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:39.215 ************************************ 00:04:39.215 END TEST env_dpdk_post_init 00:04:39.215 ************************************ 00:04:39.215 03:31:31 env -- env/env.sh@26 -- # uname 00:04:39.215 03:31:31 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:39.215 03:31:31 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.215 03:31:31 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.215 03:31:31 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.215 03:31:31 env -- common/autotest_common.sh@10 -- # set +x 00:04:39.215 ************************************ 00:04:39.215 START TEST env_mem_callbacks 00:04:39.215 ************************************ 00:04:39.215 03:31:31 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:39.476 EAL: Detected CPU lcores: 10 00:04:39.476 EAL: Detected NUMA nodes: 1 00:04:39.476 EAL: Detected shared linkage of DPDK 00:04:39.476 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:39.476 EAL: Selected IOVA mode 'PA' 00:04:39.476 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:39.476 00:04:39.476 00:04:39.476 CUnit - A unit testing framework for C - Version 2.1-3 00:04:39.476 http://cunit.sourceforge.net/ 00:04:39.476 00:04:39.476 00:04:39.476 Suite: memory 00:04:39.476 Test: test ... 00:04:39.476 register 0x200000200000 2097152 00:04:39.476 malloc 3145728 00:04:39.476 register 0x200000400000 4194304 00:04:39.476 buf 0x2000004fffc0 len 3145728 PASSED 00:04:39.476 malloc 64 00:04:39.476 buf 0x2000004ffec0 len 64 PASSED 00:04:39.476 malloc 4194304 00:04:39.476 register 0x200000800000 6291456 00:04:39.476 buf 0x2000009fffc0 len 4194304 PASSED 00:04:39.476 free 0x2000004fffc0 3145728 00:04:39.476 free 0x2000004ffec0 64 00:04:39.476 unregister 0x200000400000 4194304 PASSED 00:04:39.476 free 0x2000009fffc0 4194304 00:04:39.476 unregister 0x200000800000 6291456 PASSED 00:04:39.476 malloc 8388608 00:04:39.476 register 0x200000400000 10485760 00:04:39.476 buf 0x2000005fffc0 len 8388608 PASSED 00:04:39.476 free 0x2000005fffc0 8388608 00:04:39.476 unregister 0x200000400000 10485760 PASSED 00:04:39.476 passed 00:04:39.476 00:04:39.476 Run Summary: Type Total Ran Passed Failed Inactive 00:04:39.476 suites 1 1 n/a 0 0 00:04:39.476 tests 1 1 1 0 0 00:04:39.476 asserts 15 15 15 0 n/a 00:04:39.476 00:04:39.476 Elapsed time = 0.050 seconds 00:04:39.476 00:04:39.476 real 0m0.229s 00:04:39.476 user 0m0.064s 00:04:39.476 sys 0m0.060s 00:04:39.476 03:31:31 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.476 03:31:31 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:39.476 ************************************ 00:04:39.476 END TEST env_mem_callbacks 00:04:39.476 ************************************ 00:04:39.736 ************************************ 00:04:39.736 END TEST env 00:04:39.736 ************************************ 00:04:39.736 00:04:39.736 real 0m6.546s 00:04:39.736 user 0m4.936s 00:04:39.736 sys 0m1.155s 00:04:39.736 03:31:32 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.736 03:31:32 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:39.736 03:31:32 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:39.736 03:31:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.736 03:31:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.736 03:31:32 -- common/autotest_common.sh@10 -- # set +x 00:04:39.736 ************************************ 00:04:39.736 START TEST rpc 00:04:39.736 ************************************ 00:04:39.736 03:31:32 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:39.736 * Looking for test storage... 00:04:39.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:39.737 03:31:32 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.737 03:31:32 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.737 03:31:32 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.737 03:31:32 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.737 03:31:32 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.737 03:31:32 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.737 03:31:32 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:39.737 03:31:32 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.737 03:31:32 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.737 03:31:32 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.737 03:31:32 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.737 03:31:32 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:39.737 03:31:32 rpc -- scripts/common.sh@345 -- # : 1 00:04:39.737 03:31:32 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.737 03:31:32 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:39.737 03:31:32 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:39.737 03:31:32 rpc -- scripts/common.sh@353 -- # local d=1 00:04:39.737 03:31:32 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.737 03:31:32 rpc -- scripts/common.sh@355 -- # echo 1 00:04:39.737 03:31:32 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.737 03:31:32 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:39.737 03:31:32 rpc -- scripts/common.sh@353 -- # local d=2 00:04:39.737 03:31:32 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.737 03:31:32 rpc -- scripts/common.sh@355 -- # echo 2 00:04:39.737 03:31:32 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.737 03:31:32 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.737 03:31:32 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.737 03:31:32 rpc -- scripts/common.sh@368 -- # return 0 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:39.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.737 --rc genhtml_branch_coverage=1 00:04:39.737 --rc genhtml_function_coverage=1 00:04:39.737 --rc genhtml_legend=1 00:04:39.737 --rc geninfo_all_blocks=1 00:04:39.737 --rc geninfo_unexecuted_blocks=1 00:04:39.737 00:04:39.737 ' 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:39.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.737 --rc genhtml_branch_coverage=1 00:04:39.737 --rc genhtml_function_coverage=1 00:04:39.737 --rc genhtml_legend=1 00:04:39.737 --rc geninfo_all_blocks=1 00:04:39.737 --rc geninfo_unexecuted_blocks=1 00:04:39.737 00:04:39.737 ' 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:39.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.737 --rc genhtml_branch_coverage=1 00:04:39.737 --rc genhtml_function_coverage=1 00:04:39.737 --rc genhtml_legend=1 00:04:39.737 --rc geninfo_all_blocks=1 00:04:39.737 --rc geninfo_unexecuted_blocks=1 00:04:39.737 00:04:39.737 ' 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:39.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.737 --rc genhtml_branch_coverage=1 00:04:39.737 --rc genhtml_function_coverage=1 00:04:39.737 --rc genhtml_legend=1 00:04:39.737 --rc geninfo_all_blocks=1 00:04:39.737 --rc geninfo_unexecuted_blocks=1 00:04:39.737 00:04:39.737 ' 00:04:39.737 03:31:32 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57521 00:04:39.737 03:31:32 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:39.737 03:31:32 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57521 00:04:39.737 03:31:32 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:39.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@831 -- # '[' -z 57521 ']' 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
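
waitforlisten above blocks until the freshly started spdk_tgt (pid 57521) is accepting connections on /var/tmp/spdk.sock; from then on, every rpc_cmd in these tests is just a JSON-RPC 2.0 request over that Unix socket. A bare-bones sketch of such a request — rpc_get_methods is a real SPDK method, but the one-shot read and missing response framing are simplifications:

```c
/* Minimal JSON-RPC 2.0 call to spdk_tgt over its Unix socket, the same
 * channel rpc_cmd uses. Response handling is deliberately naive. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    struct sockaddr_un addr = { .sun_family = AF_UNIX };
    const char *req =
        "{\"jsonrpc\":\"2.0\",\"method\":\"rpc_get_methods\",\"id\":1}";
    char buf[4096];

    strncpy(addr.sun_path, "/var/tmp/spdk.sock", sizeof(addr.sun_path) - 1);

    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");    /* target not up yet: waitforlisten retries */
        return 1;
    }
    if (write(fd, req, strlen(req)) < 0) {
        perror("write");
        close(fd);
        return 1;
    }
    ssize_t n = read(fd, buf, sizeof(buf) - 1);  /* first chunk of reply */
    if (n > 0) {
        buf[n] = '\0';
        printf("%s\n", buf);
    }
    close(fd);
    return 0;
}
```
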
00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.737 03:31:32 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:39.997 [2024-10-01 03:31:32.372367] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:04:39.997 [2024-10-01 03:31:32.372750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57521 ] 00:04:39.997 [2024-10-01 03:31:32.527363] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.258 [2024-10-01 03:31:32.768243] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:40.258 [2024-10-01 03:31:32.768313] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57521' to capture a snapshot of events at runtime. 00:04:40.258 [2024-10-01 03:31:32.768325] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:40.258 [2024-10-01 03:31:32.768336] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:40.258 [2024-10-01 03:31:32.768345] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57521 for offline analysis/debug. 00:04:40.258 [2024-10-01 03:31:32.768393] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.198 03:31:33 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.198 03:31:33 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:41.198 03:31:33 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.198 03:31:33 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:41.198 03:31:33 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:41.198 03:31:33 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:41.198 03:31:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.198 03:31:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.198 03:31:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.198 ************************************ 00:04:41.198 START TEST rpc_integrity 00:04:41.198 ************************************ 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.198 03:31:33 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.198 { 00:04:41.198 "name": "Malloc0", 00:04:41.198 "aliases": [ 00:04:41.198 "721675a4-fc69-4b17-994a-ba1d21775a6b" 00:04:41.198 ], 00:04:41.198 "product_name": "Malloc disk", 00:04:41.198 "block_size": 512, 00:04:41.198 "num_blocks": 16384, 00:04:41.198 "uuid": "721675a4-fc69-4b17-994a-ba1d21775a6b", 00:04:41.198 "assigned_rate_limits": { 00:04:41.198 "rw_ios_per_sec": 0, 00:04:41.198 "rw_mbytes_per_sec": 0, 00:04:41.198 "r_mbytes_per_sec": 0, 00:04:41.198 "w_mbytes_per_sec": 0 00:04:41.198 }, 00:04:41.198 "claimed": false, 00:04:41.198 "zoned": false, 00:04:41.198 "supported_io_types": { 00:04:41.198 "read": true, 00:04:41.198 "write": true, 00:04:41.198 "unmap": true, 00:04:41.198 "flush": true, 00:04:41.198 "reset": true, 00:04:41.198 "nvme_admin": false, 00:04:41.198 "nvme_io": false, 00:04:41.198 "nvme_io_md": false, 00:04:41.198 "write_zeroes": true, 00:04:41.198 "zcopy": true, 00:04:41.198 "get_zone_info": false, 00:04:41.198 "zone_management": false, 00:04:41.198 "zone_append": false, 00:04:41.198 "compare": false, 00:04:41.198 "compare_and_write": false, 00:04:41.198 "abort": true, 00:04:41.198 "seek_hole": false, 00:04:41.198 "seek_data": false, 00:04:41.198 "copy": true, 00:04:41.198 "nvme_iov_md": false 00:04:41.198 }, 00:04:41.198 "memory_domains": [ 00:04:41.198 { 00:04:41.198 "dma_device_id": "system", 00:04:41.198 "dma_device_type": 1 00:04:41.198 }, 00:04:41.198 { 00:04:41.198 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.198 "dma_device_type": 2 00:04:41.198 } 00:04:41.198 ], 00:04:41.198 "driver_specific": {} 00:04:41.198 } 00:04:41.198 ]' 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.198 [2024-10-01 03:31:33.538692] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:41.198 [2024-10-01 03:31:33.538750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.198 [2024-10-01 03:31:33.538773] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:41.198 [2024-10-01 03:31:33.538785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.198 [2024-10-01 03:31:33.541035] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.198 [2024-10-01 03:31:33.541178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.198 Passthru0 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.198 
03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.198 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.198 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.198 { 00:04:41.198 "name": "Malloc0", 00:04:41.198 "aliases": [ 00:04:41.198 "721675a4-fc69-4b17-994a-ba1d21775a6b" 00:04:41.198 ], 00:04:41.198 "product_name": "Malloc disk", 00:04:41.198 "block_size": 512, 00:04:41.198 "num_blocks": 16384, 00:04:41.198 "uuid": "721675a4-fc69-4b17-994a-ba1d21775a6b", 00:04:41.198 "assigned_rate_limits": { 00:04:41.198 "rw_ios_per_sec": 0, 00:04:41.198 "rw_mbytes_per_sec": 0, 00:04:41.198 "r_mbytes_per_sec": 0, 00:04:41.198 "w_mbytes_per_sec": 0 00:04:41.198 }, 00:04:41.198 "claimed": true, 00:04:41.198 "claim_type": "exclusive_write", 00:04:41.198 "zoned": false, 00:04:41.198 "supported_io_types": { 00:04:41.198 "read": true, 00:04:41.198 "write": true, 00:04:41.198 "unmap": true, 00:04:41.198 "flush": true, 00:04:41.198 "reset": true, 00:04:41.198 "nvme_admin": false, 00:04:41.198 "nvme_io": false, 00:04:41.198 "nvme_io_md": false, 00:04:41.198 "write_zeroes": true, 00:04:41.198 "zcopy": true, 00:04:41.198 "get_zone_info": false, 00:04:41.198 "zone_management": false, 00:04:41.198 "zone_append": false, 00:04:41.198 "compare": false, 00:04:41.198 "compare_and_write": false, 00:04:41.199 "abort": true, 00:04:41.199 "seek_hole": false, 00:04:41.199 "seek_data": false, 00:04:41.199 "copy": true, 00:04:41.199 "nvme_iov_md": false 00:04:41.199 }, 00:04:41.199 "memory_domains": [ 00:04:41.199 { 00:04:41.199 "dma_device_id": "system", 00:04:41.199 "dma_device_type": 1 00:04:41.199 }, 00:04:41.199 { 00:04:41.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.199 "dma_device_type": 2 00:04:41.199 } 00:04:41.199 ], 00:04:41.199 "driver_specific": {} 00:04:41.199 }, 00:04:41.199 { 00:04:41.199 "name": "Passthru0", 00:04:41.199 "aliases": [ 00:04:41.199 "054dcd4c-2cc4-5851-a7ca-7bd1a566b224" 00:04:41.199 ], 00:04:41.199 "product_name": "passthru", 00:04:41.199 "block_size": 512, 00:04:41.199 "num_blocks": 16384, 00:04:41.199 "uuid": "054dcd4c-2cc4-5851-a7ca-7bd1a566b224", 00:04:41.199 "assigned_rate_limits": { 00:04:41.199 "rw_ios_per_sec": 0, 00:04:41.199 "rw_mbytes_per_sec": 0, 00:04:41.199 "r_mbytes_per_sec": 0, 00:04:41.199 "w_mbytes_per_sec": 0 00:04:41.199 }, 00:04:41.199 "claimed": false, 00:04:41.199 "zoned": false, 00:04:41.199 "supported_io_types": { 00:04:41.199 "read": true, 00:04:41.199 "write": true, 00:04:41.199 "unmap": true, 00:04:41.199 "flush": true, 00:04:41.199 "reset": true, 00:04:41.199 "nvme_admin": false, 00:04:41.199 "nvme_io": false, 00:04:41.199 "nvme_io_md": false, 00:04:41.199 "write_zeroes": true, 00:04:41.199 "zcopy": true, 00:04:41.199 "get_zone_info": false, 00:04:41.199 "zone_management": false, 00:04:41.199 "zone_append": false, 00:04:41.199 "compare": false, 00:04:41.199 "compare_and_write": false, 00:04:41.199 "abort": true, 00:04:41.199 "seek_hole": false, 00:04:41.199 "seek_data": false, 00:04:41.199 "copy": true, 00:04:41.199 "nvme_iov_md": false 00:04:41.199 }, 00:04:41.199 "memory_domains": [ 00:04:41.199 { 00:04:41.199 "dma_device_id": "system", 00:04:41.199 "dma_device_type": 1 00:04:41.199 }, 00:04:41.199 { 00:04:41.199 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.199 "dma_device_type": 2 
00:04:41.199 } 00:04:41.199 ], 00:04:41.199 "driver_specific": { 00:04:41.199 "passthru": { 00:04:41.199 "name": "Passthru0", 00:04:41.199 "base_bdev_name": "Malloc0" 00:04:41.199 } 00:04:41.199 } 00:04:41.199 } 00:04:41.199 ]' 00:04:41.199 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:41.199 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.199 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.199 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.199 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.199 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.199 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.199 ************************************ 00:04:41.199 END TEST rpc_integrity 00:04:41.199 ************************************ 00:04:41.199 03:31:33 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.199 00:04:41.199 real 0m0.254s 00:04:41.199 user 0m0.137s 00:04:41.199 sys 0m0.029s 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.199 03:31:33 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.199 03:31:33 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:41.199 03:31:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.199 03:31:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.199 03:31:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.199 ************************************ 00:04:41.199 START TEST rpc_plugins 00:04:41.199 ************************************ 00:04:41.199 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:41.199 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:41.199 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.199 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.199 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:41.459 { 00:04:41.459 "name": "Malloc1", 00:04:41.459 "aliases": 
[ 00:04:41.459 "be17c916-af98-4c3e-8b29-60cba517f0fe" 00:04:41.459 ], 00:04:41.459 "product_name": "Malloc disk", 00:04:41.459 "block_size": 4096, 00:04:41.459 "num_blocks": 256, 00:04:41.459 "uuid": "be17c916-af98-4c3e-8b29-60cba517f0fe", 00:04:41.459 "assigned_rate_limits": { 00:04:41.459 "rw_ios_per_sec": 0, 00:04:41.459 "rw_mbytes_per_sec": 0, 00:04:41.459 "r_mbytes_per_sec": 0, 00:04:41.459 "w_mbytes_per_sec": 0 00:04:41.459 }, 00:04:41.459 "claimed": false, 00:04:41.459 "zoned": false, 00:04:41.459 "supported_io_types": { 00:04:41.459 "read": true, 00:04:41.459 "write": true, 00:04:41.459 "unmap": true, 00:04:41.459 "flush": true, 00:04:41.459 "reset": true, 00:04:41.459 "nvme_admin": false, 00:04:41.459 "nvme_io": false, 00:04:41.459 "nvme_io_md": false, 00:04:41.459 "write_zeroes": true, 00:04:41.459 "zcopy": true, 00:04:41.459 "get_zone_info": false, 00:04:41.459 "zone_management": false, 00:04:41.459 "zone_append": false, 00:04:41.459 "compare": false, 00:04:41.459 "compare_and_write": false, 00:04:41.459 "abort": true, 00:04:41.459 "seek_hole": false, 00:04:41.459 "seek_data": false, 00:04:41.459 "copy": true, 00:04:41.459 "nvme_iov_md": false 00:04:41.459 }, 00:04:41.459 "memory_domains": [ 00:04:41.459 { 00:04:41.459 "dma_device_id": "system", 00:04:41.459 "dma_device_type": 1 00:04:41.459 }, 00:04:41.459 { 00:04:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.459 "dma_device_type": 2 00:04:41.459 } 00:04:41.459 ], 00:04:41.459 "driver_specific": {} 00:04:41.459 } 00:04:41.459 ]' 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:41.459 ************************************ 00:04:41.459 END TEST rpc_plugins 00:04:41.459 ************************************ 00:04:41.459 03:31:33 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:41.459 00:04:41.459 real 0m0.108s 00:04:41.459 user 0m0.056s 00:04:41.459 sys 0m0.021s 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.459 03:31:33 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 03:31:33 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:41.459 03:31:33 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.459 03:31:33 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.459 03:31:33 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 ************************************ 00:04:41.459 START TEST rpc_trace_cmd_test 00:04:41.459 ************************************ 00:04:41.459 03:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 
-- # rpc_trace_cmd_test 00:04:41.459 03:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:41.459 03:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:41.459 03:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.459 03:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.459 03:31:33 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.459 03:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:41.459 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57521", 00:04:41.459 "tpoint_group_mask": "0x8", 00:04:41.459 "iscsi_conn": { 00:04:41.460 "mask": "0x2", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "scsi": { 00:04:41.460 "mask": "0x4", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "bdev": { 00:04:41.460 "mask": "0x8", 00:04:41.460 "tpoint_mask": "0xffffffffffffffff" 00:04:41.460 }, 00:04:41.460 "nvmf_rdma": { 00:04:41.460 "mask": "0x10", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "nvmf_tcp": { 00:04:41.460 "mask": "0x20", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "ftl": { 00:04:41.460 "mask": "0x40", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "blobfs": { 00:04:41.460 "mask": "0x80", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "dsa": { 00:04:41.460 "mask": "0x200", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "thread": { 00:04:41.460 "mask": "0x400", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "nvme_pcie": { 00:04:41.460 "mask": "0x800", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "iaa": { 00:04:41.460 "mask": "0x1000", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "nvme_tcp": { 00:04:41.460 "mask": "0x2000", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "bdev_nvme": { 00:04:41.460 "mask": "0x4000", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "sock": { 00:04:41.460 "mask": "0x8000", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "blob": { 00:04:41.460 "mask": "0x10000", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 }, 00:04:41.460 "bdev_raid": { 00:04:41.460 "mask": "0x20000", 00:04:41.460 "tpoint_mask": "0x0" 00:04:41.460 } 00:04:41.460 }' 00:04:41.460 03:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:41.460 03:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 18 -gt 2 ']' 00:04:41.460 03:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:41.460 03:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:41.460 03:31:33 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:41.721 03:31:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:41.721 03:31:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:41.721 03:31:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:41.721 03:31:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:41.721 ************************************ 00:04:41.721 END TEST rpc_trace_cmd_test 00:04:41.721 ************************************ 00:04:41.721 03:31:34 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:41.721 00:04:41.721 real 0m0.176s 00:04:41.721 user 0m0.147s 00:04:41.721 sys 0m0.020s 00:04:41.721 03:31:34 rpc.rpc_trace_cmd_test -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.721 03:31:34 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.721 03:31:34 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:41.721 03:31:34 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:41.721 03:31:34 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:41.721 03:31:34 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.721 03:31:34 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.721 03:31:34 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.721 ************************************ 00:04:41.721 START TEST rpc_daemon_integrity 00:04:41.721 ************************************ 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.721 { 00:04:41.721 "name": "Malloc2", 00:04:41.721 "aliases": [ 00:04:41.721 "0cf543e5-eacf-4910-8b70-adcd592e3d0e" 00:04:41.721 ], 00:04:41.721 "product_name": "Malloc disk", 00:04:41.721 "block_size": 512, 00:04:41.721 "num_blocks": 16384, 00:04:41.721 "uuid": "0cf543e5-eacf-4910-8b70-adcd592e3d0e", 00:04:41.721 "assigned_rate_limits": { 00:04:41.721 "rw_ios_per_sec": 0, 00:04:41.721 "rw_mbytes_per_sec": 0, 00:04:41.721 "r_mbytes_per_sec": 0, 00:04:41.721 "w_mbytes_per_sec": 0 00:04:41.721 }, 00:04:41.721 "claimed": false, 00:04:41.721 "zoned": false, 00:04:41.721 "supported_io_types": { 00:04:41.721 "read": true, 00:04:41.721 "write": true, 00:04:41.721 "unmap": true, 00:04:41.721 "flush": true, 00:04:41.721 "reset": true, 00:04:41.721 "nvme_admin": false, 00:04:41.721 "nvme_io": false, 00:04:41.721 "nvme_io_md": false, 00:04:41.721 "write_zeroes": true, 00:04:41.721 "zcopy": true, 00:04:41.721 "get_zone_info": false, 00:04:41.721 "zone_management": false, 00:04:41.721 "zone_append": false, 00:04:41.721 "compare": false, 00:04:41.721 "compare_and_write": false, 00:04:41.721 "abort": true, 00:04:41.721 "seek_hole": false, 00:04:41.721 
"seek_data": false, 00:04:41.721 "copy": true, 00:04:41.721 "nvme_iov_md": false 00:04:41.721 }, 00:04:41.721 "memory_domains": [ 00:04:41.721 { 00:04:41.721 "dma_device_id": "system", 00:04:41.721 "dma_device_type": 1 00:04:41.721 }, 00:04:41.721 { 00:04:41.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.721 "dma_device_type": 2 00:04:41.721 } 00:04:41.721 ], 00:04:41.721 "driver_specific": {} 00:04:41.721 } 00:04:41.721 ]' 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.721 [2024-10-01 03:31:34.245526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:41.721 [2024-10-01 03:31:34.245575] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.721 [2024-10-01 03:31:34.245593] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:41.721 [2024-10-01 03:31:34.245604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.721 [2024-10-01 03:31:34.247708] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.721 [2024-10-01 03:31:34.247746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.721 Passthru0 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.721 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.721 { 00:04:41.721 "name": "Malloc2", 00:04:41.721 "aliases": [ 00:04:41.721 "0cf543e5-eacf-4910-8b70-adcd592e3d0e" 00:04:41.721 ], 00:04:41.721 "product_name": "Malloc disk", 00:04:41.721 "block_size": 512, 00:04:41.721 "num_blocks": 16384, 00:04:41.721 "uuid": "0cf543e5-eacf-4910-8b70-adcd592e3d0e", 00:04:41.721 "assigned_rate_limits": { 00:04:41.721 "rw_ios_per_sec": 0, 00:04:41.721 "rw_mbytes_per_sec": 0, 00:04:41.721 "r_mbytes_per_sec": 0, 00:04:41.721 "w_mbytes_per_sec": 0 00:04:41.721 }, 00:04:41.721 "claimed": true, 00:04:41.721 "claim_type": "exclusive_write", 00:04:41.721 "zoned": false, 00:04:41.721 "supported_io_types": { 00:04:41.721 "read": true, 00:04:41.721 "write": true, 00:04:41.721 "unmap": true, 00:04:41.721 "flush": true, 00:04:41.721 "reset": true, 00:04:41.721 "nvme_admin": false, 00:04:41.721 "nvme_io": false, 00:04:41.721 "nvme_io_md": false, 00:04:41.721 "write_zeroes": true, 00:04:41.721 "zcopy": true, 00:04:41.721 "get_zone_info": false, 00:04:41.721 "zone_management": false, 00:04:41.721 "zone_append": false, 00:04:41.721 "compare": false, 00:04:41.721 "compare_and_write": false, 00:04:41.721 "abort": true, 00:04:41.721 "seek_hole": false, 00:04:41.721 "seek_data": false, 00:04:41.721 "copy": true, 00:04:41.721 "nvme_iov_md": false 00:04:41.721 }, 00:04:41.721 
"memory_domains": [ 00:04:41.721 { 00:04:41.721 "dma_device_id": "system", 00:04:41.721 "dma_device_type": 1 00:04:41.721 }, 00:04:41.721 { 00:04:41.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.721 "dma_device_type": 2 00:04:41.721 } 00:04:41.721 ], 00:04:41.721 "driver_specific": {} 00:04:41.721 }, 00:04:41.721 { 00:04:41.721 "name": "Passthru0", 00:04:41.721 "aliases": [ 00:04:41.721 "1a34fb60-ee44-5212-8188-8f3cbc5bd493" 00:04:41.721 ], 00:04:41.721 "product_name": "passthru", 00:04:41.721 "block_size": 512, 00:04:41.721 "num_blocks": 16384, 00:04:41.721 "uuid": "1a34fb60-ee44-5212-8188-8f3cbc5bd493", 00:04:41.721 "assigned_rate_limits": { 00:04:41.721 "rw_ios_per_sec": 0, 00:04:41.721 "rw_mbytes_per_sec": 0, 00:04:41.721 "r_mbytes_per_sec": 0, 00:04:41.721 "w_mbytes_per_sec": 0 00:04:41.721 }, 00:04:41.721 "claimed": false, 00:04:41.721 "zoned": false, 00:04:41.721 "supported_io_types": { 00:04:41.721 "read": true, 00:04:41.721 "write": true, 00:04:41.721 "unmap": true, 00:04:41.721 "flush": true, 00:04:41.721 "reset": true, 00:04:41.721 "nvme_admin": false, 00:04:41.721 "nvme_io": false, 00:04:41.721 "nvme_io_md": false, 00:04:41.721 "write_zeroes": true, 00:04:41.721 "zcopy": true, 00:04:41.721 "get_zone_info": false, 00:04:41.721 "zone_management": false, 00:04:41.721 "zone_append": false, 00:04:41.721 "compare": false, 00:04:41.721 "compare_and_write": false, 00:04:41.721 "abort": true, 00:04:41.721 "seek_hole": false, 00:04:41.721 "seek_data": false, 00:04:41.721 "copy": true, 00:04:41.721 "nvme_iov_md": false 00:04:41.721 }, 00:04:41.721 "memory_domains": [ 00:04:41.721 { 00:04:41.721 "dma_device_id": "system", 00:04:41.721 "dma_device_type": 1 00:04:41.721 }, 00:04:41.721 { 00:04:41.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.721 "dma_device_type": 2 00:04:41.721 } 00:04:41.721 ], 00:04:41.722 "driver_specific": { 00:04:41.722 "passthru": { 00:04:41.722 "name": "Passthru0", 00:04:41.722 "base_bdev_name": "Malloc2" 00:04:41.722 } 00:04:41.722 } 00:04:41.722 } 00:04:41.722 ]' 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.981 
************************************ 00:04:41.981 END TEST rpc_daemon_integrity 00:04:41.981 ************************************ 00:04:41.981 03:31:34 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.981 00:04:41.981 real 0m0.243s 00:04:41.981 user 0m0.130s 00:04:41.981 sys 0m0.027s 00:04:41.982 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.982 03:31:34 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.982 03:31:34 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:41.982 03:31:34 rpc -- rpc/rpc.sh@84 -- # killprocess 57521 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@950 -- # '[' -z 57521 ']' 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@954 -- # kill -0 57521 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@955 -- # uname 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57521 00:04:41.982 killing process with pid 57521 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57521' 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@969 -- # kill 57521 00:04:41.982 03:31:34 rpc -- common/autotest_common.sh@974 -- # wait 57521 00:04:43.892 00:04:43.892 real 0m3.923s 00:04:43.892 user 0m4.275s 00:04:43.892 sys 0m0.693s 00:04:43.892 03:31:36 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:43.892 ************************************ 00:04:43.892 END TEST rpc 00:04:43.892 ************************************ 00:04:43.892 03:31:36 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.892 03:31:36 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:43.892 03:31:36 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.892 03:31:36 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.892 03:31:36 -- common/autotest_common.sh@10 -- # set +x 00:04:43.892 ************************************ 00:04:43.892 START TEST skip_rpc 00:04:43.892 ************************************ 00:04:43.892 03:31:36 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:43.892 * Looking for test storage... 
00:04:43.892 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:43.892 03:31:36 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:43.892 03:31:36 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:43.892 03:31:36 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:43.892 03:31:36 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.892 03:31:36 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:43.892 03:31:36 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.892 03:31:36 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:43.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.892 --rc genhtml_branch_coverage=1 00:04:43.892 --rc genhtml_function_coverage=1 00:04:43.892 --rc genhtml_legend=1 00:04:43.892 --rc geninfo_all_blocks=1 00:04:43.892 --rc geninfo_unexecuted_blocks=1 00:04:43.892 00:04:43.892 ' 00:04:43.892 03:31:36 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:43.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.892 --rc genhtml_branch_coverage=1 00:04:43.892 --rc genhtml_function_coverage=1 00:04:43.892 --rc genhtml_legend=1 00:04:43.892 --rc geninfo_all_blocks=1 00:04:43.892 --rc geninfo_unexecuted_blocks=1 00:04:43.892 00:04:43.892 ' 00:04:43.892 03:31:36 skip_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:04:43.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.892 --rc genhtml_branch_coverage=1 00:04:43.892 --rc genhtml_function_coverage=1 00:04:43.892 --rc genhtml_legend=1 00:04:43.893 --rc geninfo_all_blocks=1 00:04:43.893 --rc geninfo_unexecuted_blocks=1 00:04:43.893 00:04:43.893 ' 00:04:43.893 03:31:36 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:43.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.893 --rc genhtml_branch_coverage=1 00:04:43.893 --rc genhtml_function_coverage=1 00:04:43.893 --rc genhtml_legend=1 00:04:43.893 --rc geninfo_all_blocks=1 00:04:43.893 --rc geninfo_unexecuted_blocks=1 00:04:43.893 00:04:43.893 ' 00:04:43.893 03:31:36 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:43.893 03:31:36 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:43.893 03:31:36 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:43.893 03:31:36 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:43.893 03:31:36 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:43.893 03:31:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.893 ************************************ 00:04:43.893 START TEST skip_rpc 00:04:43.893 ************************************ 00:04:43.893 03:31:36 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:43.893 03:31:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57733 00:04:43.893 03:31:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:43.893 03:31:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:43.893 03:31:36 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:43.893 [2024-10-01 03:31:36.322102] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
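The skip_rpc case launched just above starts the target with --no-rpc-server and then asserts that RPC is unreachable: after the five-second sleep, the NOT wrapper below requires rpc_cmd spdk_get_version to fail. A minimal manual equivalent (a sketch; binary and script paths taken from the log):

    # Sketch: with --no-rpc-server there is no /var/tmp/spdk.sock to connect to,
    # so the RPC call must fail and the check inverts its exit status.
    build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    tgt_pid=$!
    sleep 5
    if scripts/rpc.py spdk_get_version; then
        echo "FAIL: RPC server unexpectedly reachable"
    else
        echo "OK: RPC correctly unavailable"
    fi
    kill "$tgt_pid"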
00:04:43.893 [2024-10-01 03:31:36.322309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57733 ] 00:04:44.153 [2024-10-01 03:31:36.473613] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.153 [2024-10-01 03:31:36.652709] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57733 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57733 ']' 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57733 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57733 00:04:49.451 killing process with pid 57733 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57733' 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57733 00:04:49.451 03:31:41 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57733 00:04:50.019 00:04:50.020 real 0m6.298s 00:04:50.020 user 0m5.915s 00:04:50.020 sys 0m0.285s 00:04:50.020 03:31:42 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.020 ************************************ 00:04:50.020 END TEST skip_rpc 00:04:50.020 ************************************ 00:04:50.020 03:31:42 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:50.280 03:31:42 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:50.280 03:31:42 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:50.280 03:31:42 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:50.280 03:31:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.280 ************************************ 00:04:50.280 START TEST skip_rpc_with_json 00:04:50.280 ************************************ 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57832 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57832 00:04:50.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57832 ']' 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:50.280 03:31:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:50.280 [2024-10-01 03:31:42.713211] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:50.280 [2024-10-01 03:31:42.713378] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57832 ] 00:04:50.539 [2024-10-01 03:31:42.876211] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.539 [2024-10-01 03:31:43.055198] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.108 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.108 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:51.108 03:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:51.108 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.108 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.108 [2024-10-01 03:31:43.644983] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:51.108 request: 00:04:51.108 { 00:04:51.108 "trtype": "tcp", 00:04:51.108 "method": "nvmf_get_transports", 00:04:51.108 "req_id": 1 00:04:51.108 } 00:04:51.108 Got JSON-RPC error response 00:04:51.108 response: 00:04:51.108 { 00:04:51.108 "code": -19, 00:04:51.108 "message": "No such device" 00:04:51.108 } 00:04:51.108 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:51.108 03:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:51.108 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.108 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.367 [2024-10-01 03:31:43.657099] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:51.367 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.367 03:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:51.367 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:51.367 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:51.367 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:51.367 03:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:51.367 { 00:04:51.367 "subsystems": [ 00:04:51.367 { 00:04:51.367 "subsystem": "fsdev", 00:04:51.367 "config": [ 00:04:51.367 { 00:04:51.367 "method": "fsdev_set_opts", 00:04:51.367 "params": { 00:04:51.367 "fsdev_io_pool_size": 65535, 00:04:51.367 "fsdev_io_cache_size": 256 00:04:51.368 } 00:04:51.368 } 00:04:51.368 ] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "keyring", 00:04:51.368 "config": [] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "iobuf", 00:04:51.368 "config": [ 00:04:51.368 { 00:04:51.368 "method": "iobuf_set_options", 00:04:51.368 "params": { 00:04:51.368 "small_pool_count": 8192, 00:04:51.368 "large_pool_count": 1024, 00:04:51.368 "small_bufsize": 8192, 00:04:51.368 "large_bufsize": 135168 00:04:51.368 } 00:04:51.368 } 00:04:51.368 ] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "sock", 00:04:51.368 "config": [ 00:04:51.368 { 00:04:51.368 "method": 
"sock_set_default_impl", 00:04:51.368 "params": { 00:04:51.368 "impl_name": "posix" 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "sock_impl_set_options", 00:04:51.368 "params": { 00:04:51.368 "impl_name": "ssl", 00:04:51.368 "recv_buf_size": 4096, 00:04:51.368 "send_buf_size": 4096, 00:04:51.368 "enable_recv_pipe": true, 00:04:51.368 "enable_quickack": false, 00:04:51.368 "enable_placement_id": 0, 00:04:51.368 "enable_zerocopy_send_server": true, 00:04:51.368 "enable_zerocopy_send_client": false, 00:04:51.368 "zerocopy_threshold": 0, 00:04:51.368 "tls_version": 0, 00:04:51.368 "enable_ktls": false 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "sock_impl_set_options", 00:04:51.368 "params": { 00:04:51.368 "impl_name": "posix", 00:04:51.368 "recv_buf_size": 2097152, 00:04:51.368 "send_buf_size": 2097152, 00:04:51.368 "enable_recv_pipe": true, 00:04:51.368 "enable_quickack": false, 00:04:51.368 "enable_placement_id": 0, 00:04:51.368 "enable_zerocopy_send_server": true, 00:04:51.368 "enable_zerocopy_send_client": false, 00:04:51.368 "zerocopy_threshold": 0, 00:04:51.368 "tls_version": 0, 00:04:51.368 "enable_ktls": false 00:04:51.368 } 00:04:51.368 } 00:04:51.368 ] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "vmd", 00:04:51.368 "config": [] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "accel", 00:04:51.368 "config": [ 00:04:51.368 { 00:04:51.368 "method": "accel_set_options", 00:04:51.368 "params": { 00:04:51.368 "small_cache_size": 128, 00:04:51.368 "large_cache_size": 16, 00:04:51.368 "task_count": 2048, 00:04:51.368 "sequence_count": 2048, 00:04:51.368 "buf_count": 2048 00:04:51.368 } 00:04:51.368 } 00:04:51.368 ] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "bdev", 00:04:51.368 "config": [ 00:04:51.368 { 00:04:51.368 "method": "bdev_set_options", 00:04:51.368 "params": { 00:04:51.368 "bdev_io_pool_size": 65535, 00:04:51.368 "bdev_io_cache_size": 256, 00:04:51.368 "bdev_auto_examine": true, 00:04:51.368 "iobuf_small_cache_size": 128, 00:04:51.368 "iobuf_large_cache_size": 16 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "bdev_raid_set_options", 00:04:51.368 "params": { 00:04:51.368 "process_window_size_kb": 1024, 00:04:51.368 "process_max_bandwidth_mb_sec": 0 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "bdev_iscsi_set_options", 00:04:51.368 "params": { 00:04:51.368 "timeout_sec": 30 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "bdev_nvme_set_options", 00:04:51.368 "params": { 00:04:51.368 "action_on_timeout": "none", 00:04:51.368 "timeout_us": 0, 00:04:51.368 "timeout_admin_us": 0, 00:04:51.368 "keep_alive_timeout_ms": 10000, 00:04:51.368 "arbitration_burst": 0, 00:04:51.368 "low_priority_weight": 0, 00:04:51.368 "medium_priority_weight": 0, 00:04:51.368 "high_priority_weight": 0, 00:04:51.368 "nvme_adminq_poll_period_us": 10000, 00:04:51.368 "nvme_ioq_poll_period_us": 0, 00:04:51.368 "io_queue_requests": 0, 00:04:51.368 "delay_cmd_submit": true, 00:04:51.368 "transport_retry_count": 4, 00:04:51.368 "bdev_retry_count": 3, 00:04:51.368 "transport_ack_timeout": 0, 00:04:51.368 "ctrlr_loss_timeout_sec": 0, 00:04:51.368 "reconnect_delay_sec": 0, 00:04:51.368 "fast_io_fail_timeout_sec": 0, 00:04:51.368 "disable_auto_failback": false, 00:04:51.368 "generate_uuids": false, 00:04:51.368 "transport_tos": 0, 00:04:51.368 "nvme_error_stat": false, 00:04:51.368 "rdma_srq_size": 0, 00:04:51.368 "io_path_stat": false, 00:04:51.368 
"allow_accel_sequence": false, 00:04:51.368 "rdma_max_cq_size": 0, 00:04:51.368 "rdma_cm_event_timeout_ms": 0, 00:04:51.368 "dhchap_digests": [ 00:04:51.368 "sha256", 00:04:51.368 "sha384", 00:04:51.368 "sha512" 00:04:51.368 ], 00:04:51.368 "dhchap_dhgroups": [ 00:04:51.368 "null", 00:04:51.368 "ffdhe2048", 00:04:51.368 "ffdhe3072", 00:04:51.368 "ffdhe4096", 00:04:51.368 "ffdhe6144", 00:04:51.368 "ffdhe8192" 00:04:51.368 ] 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "bdev_nvme_set_hotplug", 00:04:51.368 "params": { 00:04:51.368 "period_us": 100000, 00:04:51.368 "enable": false 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "bdev_wait_for_examine" 00:04:51.368 } 00:04:51.368 ] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "scsi", 00:04:51.368 "config": null 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "scheduler", 00:04:51.368 "config": [ 00:04:51.368 { 00:04:51.368 "method": "framework_set_scheduler", 00:04:51.368 "params": { 00:04:51.368 "name": "static" 00:04:51.368 } 00:04:51.368 } 00:04:51.368 ] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "vhost_scsi", 00:04:51.368 "config": [] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "vhost_blk", 00:04:51.368 "config": [] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "ublk", 00:04:51.368 "config": [] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "nbd", 00:04:51.368 "config": [] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "nvmf", 00:04:51.368 "config": [ 00:04:51.368 { 00:04:51.368 "method": "nvmf_set_config", 00:04:51.368 "params": { 00:04:51.368 "discovery_filter": "match_any", 00:04:51.368 "admin_cmd_passthru": { 00:04:51.368 "identify_ctrlr": false 00:04:51.368 }, 00:04:51.368 "dhchap_digests": [ 00:04:51.368 "sha256", 00:04:51.368 "sha384", 00:04:51.368 "sha512" 00:04:51.368 ], 00:04:51.368 "dhchap_dhgroups": [ 00:04:51.368 "null", 00:04:51.368 "ffdhe2048", 00:04:51.368 "ffdhe3072", 00:04:51.368 "ffdhe4096", 00:04:51.368 "ffdhe6144", 00:04:51.368 "ffdhe8192" 00:04:51.368 ] 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "nvmf_set_max_subsystems", 00:04:51.368 "params": { 00:04:51.368 "max_subsystems": 1024 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "nvmf_set_crdt", 00:04:51.368 "params": { 00:04:51.368 "crdt1": 0, 00:04:51.368 "crdt2": 0, 00:04:51.368 "crdt3": 0 00:04:51.368 } 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "method": "nvmf_create_transport", 00:04:51.368 "params": { 00:04:51.368 "trtype": "TCP", 00:04:51.368 "max_queue_depth": 128, 00:04:51.368 "max_io_qpairs_per_ctrlr": 127, 00:04:51.368 "in_capsule_data_size": 4096, 00:04:51.368 "max_io_size": 131072, 00:04:51.368 "io_unit_size": 131072, 00:04:51.368 "max_aq_depth": 128, 00:04:51.368 "num_shared_buffers": 511, 00:04:51.368 "buf_cache_size": 4294967295, 00:04:51.368 "dif_insert_or_strip": false, 00:04:51.368 "zcopy": false, 00:04:51.368 "c2h_success": true, 00:04:51.368 "sock_priority": 0, 00:04:51.368 "abort_timeout_sec": 1, 00:04:51.368 "ack_timeout": 0, 00:04:51.368 "data_wr_pool_size": 0 00:04:51.368 } 00:04:51.368 } 00:04:51.368 ] 00:04:51.368 }, 00:04:51.368 { 00:04:51.368 "subsystem": "iscsi", 00:04:51.368 "config": [ 00:04:51.368 { 00:04:51.368 "method": "iscsi_set_options", 00:04:51.368 "params": { 00:04:51.368 "node_base": "iqn.2016-06.io.spdk", 00:04:51.368 "max_sessions": 128, 00:04:51.368 "max_connections_per_session": 2, 00:04:51.368 "max_queue_depth": 64, 00:04:51.368 "default_time2wait": 2, 
00:04:51.368 "default_time2retain": 20, 00:04:51.368 "first_burst_length": 8192, 00:04:51.368 "immediate_data": true, 00:04:51.368 "allow_duplicated_isid": false, 00:04:51.368 "error_recovery_level": 0, 00:04:51.368 "nop_timeout": 60, 00:04:51.368 "nop_in_interval": 30, 00:04:51.368 "disable_chap": false, 00:04:51.368 "require_chap": false, 00:04:51.368 "mutual_chap": false, 00:04:51.368 "chap_group": 0, 00:04:51.368 "max_large_datain_per_connection": 64, 00:04:51.368 "max_r2t_per_connection": 4, 00:04:51.368 "pdu_pool_size": 36864, 00:04:51.368 "immediate_data_pool_size": 16384, 00:04:51.368 "data_out_pool_size": 2048 00:04:51.368 } 00:04:51.368 } 00:04:51.368 ] 00:04:51.369 } 00:04:51.369 ] 00:04:51.369 } 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57832 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57832 ']' 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57832 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57832 00:04:51.369 killing process with pid 57832 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57832' 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57832 00:04:51.369 03:31:43 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57832 00:04:53.273 03:31:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57871 00:04:53.274 03:31:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:53.274 03:31:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57871 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57871 ']' 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57871 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57871 00:04:58.535 killing process with pid 57871 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57871' 00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57871 
00:04:58.535 03:31:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57871 00:04:59.104 03:31:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:59.104 03:31:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:59.104 00:04:59.104 real 0m9.039s 00:04:59.104 user 0m8.691s 00:04:59.104 sys 0m0.624s 00:04:59.104 ************************************ 00:04:59.104 END TEST skip_rpc_with_json 00:04:59.104 03:31:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.104 03:31:51 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:59.104 ************************************ 00:04:59.365 03:31:51 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:59.365 03:31:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.365 03:31:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.365 03:31:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.365 ************************************ 00:04:59.365 START TEST skip_rpc_with_delay 00:04:59.365 ************************************ 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:59.365 [2024-10-01 03:31:51.761872] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
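skip_rpc_with_delay exercises an argument-validation path: --wait-for-rpc is meaningless without an RPC server, so app startup must refuse the combination, as the app.c error just above shows. A sketch of the expected behavior (flags as in the log):

    # Sketch: this invocation must exit non-zero with the error logged above.
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "FAIL: invalid flag combination was accepted"
    else
        echo "OK: startup rejected as expected"
    fi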
00:04:59.365 [2024-10-01 03:31:51.761976] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:59.365 ************************************ 00:04:59.365 END TEST skip_rpc_with_delay 00:04:59.365 ************************************ 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:59.365 00:04:59.365 real 0m0.125s 00:04:59.365 user 0m0.066s 00:04:59.365 sys 0m0.058s 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.365 03:31:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:59.365 03:31:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:59.365 03:31:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:59.365 03:31:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:59.365 03:31:51 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.365 03:31:51 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.365 03:31:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.365 ************************************ 00:04:59.365 START TEST exit_on_failed_rpc_init 00:04:59.365 ************************************ 00:04:59.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57994 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57994 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57994 ']' 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:59.365 03:31:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:59.626 [2024-10-01 03:31:51.980127] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:04:59.626 [2024-10-01 03:31:51.980512] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57994 ] 00:04:59.626 [2024-10-01 03:31:52.152355] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:59.886 [2024-10-01 03:31:52.389284] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:00.829 03:31:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:00.829 [2024-10-01 03:31:53.185553] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:00.829 [2024-10-01 03:31:53.185898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58012 ] 00:05:00.829 [2024-10-01 03:31:53.339271] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:01.089 [2024-10-01 03:31:53.628098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.089 [2024-10-01 03:31:53.628243] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
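exit_on_failed_rpc_init provokes the socket collision reported above: with the first target already bound to the default /var/tmp/spdk.sock, a second instance on another core mask must fail RPC initialization and exit non-zero. A sketch (core masks as in the log; the one-second wait is an assumption standing in for the harness's waitforlisten):

    # Sketch: the second target collides on the default RPC socket and must not start.
    build/bin/spdk_tgt -m 0x1 &    # first instance claims /var/tmp/spdk.sock
    sleep 1                        # assumed stand-in for waitforlisten
    if build/bin/spdk_tgt -m 0x2; then
        echo "FAIL: second target started despite socket collision"
    else
        echo "OK: exit on failed RPC init behaved as expected"
    fi
    kill %1                        # clean up the surviving first instance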
00:05:01.089 [2024-10-01 03:31:53.628259] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:01.089 [2024-10-01 03:31:53.628275] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57994 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57994 ']' 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57994 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57994 00:05:01.665 killing process with pid 57994 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57994' 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57994 00:05:01.665 03:31:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57994 00:05:03.111 00:05:03.111 real 0m3.662s 00:05:03.111 user 0m4.152s 00:05:03.111 sys 0m0.623s 00:05:03.111 03:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.111 ************************************ 00:05:03.111 END TEST exit_on_failed_rpc_init 00:05:03.111 ************************************ 00:05:03.111 03:31:55 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:03.111 03:31:55 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:03.111 ************************************ 00:05:03.111 END TEST skip_rpc 00:05:03.111 ************************************ 00:05:03.111 00:05:03.111 real 0m19.495s 00:05:03.111 user 0m18.955s 00:05:03.111 sys 0m1.788s 00:05:03.111 03:31:55 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.111 03:31:55 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:03.111 03:31:55 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.111 03:31:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.111 03:31:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.111 03:31:55 -- common/autotest_common.sh@10 -- # set +x 00:05:03.111 
************************************ 00:05:03.111 START TEST rpc_client 00:05:03.111 ************************************ 00:05:03.111 03:31:55 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:03.371 * Looking for test storage... 00:05:03.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:03.371 03:31:55 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.371 03:31:55 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.371 03:31:55 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.371 03:31:55 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.371 03:31:55 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:03.372 03:31:55 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.372 03:31:55 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.372 --rc genhtml_branch_coverage=1 00:05:03.372 --rc genhtml_function_coverage=1 00:05:03.372 --rc genhtml_legend=1 00:05:03.372 --rc geninfo_all_blocks=1 00:05:03.372 --rc geninfo_unexecuted_blocks=1 00:05:03.372 00:05:03.372 ' 00:05:03.372 03:31:55 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.372 --rc genhtml_branch_coverage=1 00:05:03.372 --rc genhtml_function_coverage=1 00:05:03.372 --rc genhtml_legend=1 00:05:03.372 --rc geninfo_all_blocks=1 00:05:03.372 --rc geninfo_unexecuted_blocks=1 00:05:03.372 00:05:03.372 ' 00:05:03.372 03:31:55 rpc_client -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.372 --rc genhtml_branch_coverage=1 00:05:03.372 --rc genhtml_function_coverage=1 00:05:03.372 --rc genhtml_legend=1 00:05:03.372 --rc geninfo_all_blocks=1 00:05:03.372 --rc geninfo_unexecuted_blocks=1 00:05:03.372 00:05:03.372 ' 00:05:03.372 03:31:55 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.372 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.372 --rc genhtml_branch_coverage=1 00:05:03.372 --rc genhtml_function_coverage=1 00:05:03.372 --rc genhtml_legend=1 00:05:03.372 --rc geninfo_all_blocks=1 00:05:03.372 --rc geninfo_unexecuted_blocks=1 00:05:03.372 00:05:03.372 ' 00:05:03.372 03:31:55 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:03.372 OK 00:05:03.372 03:31:55 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:03.372 ************************************ 00:05:03.372 END TEST rpc_client 00:05:03.372 ************************************ 00:05:03.372 00:05:03.372 real 0m0.231s 00:05:03.372 user 0m0.137s 00:05:03.372 sys 0m0.093s 00:05:03.372 03:31:55 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.372 03:31:55 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:03.372 03:31:55 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.372 03:31:55 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.372 03:31:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.372 03:31:55 -- common/autotest_common.sh@10 -- # set +x 00:05:03.372 ************************************ 00:05:03.372 START TEST json_config 00:05:03.372 ************************************ 00:05:03.372 03:31:55 json_config -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:03.633 03:31:55 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.633 03:31:55 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.633 03:31:55 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.633 03:31:56 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.633 03:31:56 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.633 03:31:56 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.633 03:31:56 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.633 03:31:56 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.633 03:31:56 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.633 03:31:56 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.633 03:31:56 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.633 03:31:56 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.633 03:31:56 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.633 03:31:56 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.633 03:31:56 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.633 03:31:56 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:03.633 03:31:56 json_config -- scripts/common.sh@345 -- # : 1 00:05:03.633 03:31:56 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.633 03:31:56 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:03.633 03:31:56 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:03.633 03:31:56 json_config -- scripts/common.sh@353 -- # local d=1 00:05:03.633 03:31:56 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.633 03:31:56 json_config -- scripts/common.sh@355 -- # echo 1 00:05:03.633 03:31:56 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.633 03:31:56 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:03.633 03:31:56 json_config -- scripts/common.sh@353 -- # local d=2 00:05:03.633 03:31:56 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.633 03:31:56 json_config -- scripts/common.sh@355 -- # echo 2 00:05:03.633 03:31:56 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.633 03:31:56 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.633 03:31:56 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.633 03:31:56 json_config -- scripts/common.sh@368 -- # return 0 00:05:03.633 03:31:56 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.633 03:31:56 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.633 --rc genhtml_branch_coverage=1 00:05:03.633 --rc genhtml_function_coverage=1 00:05:03.633 --rc genhtml_legend=1 00:05:03.633 --rc geninfo_all_blocks=1 00:05:03.633 --rc geninfo_unexecuted_blocks=1 00:05:03.633 00:05:03.633 ' 00:05:03.633 03:31:56 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.633 --rc genhtml_branch_coverage=1 00:05:03.633 --rc genhtml_function_coverage=1 00:05:03.633 --rc genhtml_legend=1 00:05:03.633 --rc geninfo_all_blocks=1 00:05:03.633 --rc geninfo_unexecuted_blocks=1 00:05:03.633 00:05:03.633 ' 00:05:03.633 03:31:56 json_config -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.633 --rc genhtml_branch_coverage=1 00:05:03.633 --rc genhtml_function_coverage=1 00:05:03.633 --rc genhtml_legend=1 00:05:03.633 --rc geninfo_all_blocks=1 00:05:03.633 --rc geninfo_unexecuted_blocks=1 00:05:03.633 00:05:03.633 ' 00:05:03.633 03:31:56 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.633 --rc genhtml_branch_coverage=1 00:05:03.633 --rc genhtml_function_coverage=1 00:05:03.633 --rc genhtml_legend=1 00:05:03.633 --rc geninfo_all_blocks=1 00:05:03.633 --rc geninfo_unexecuted_blocks=1 00:05:03.633 00:05:03.633 ' 00:05:03.633 03:31:56 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.633 03:31:56 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ab7b33bf-904a-48ad-bfbc-0fc5fd07eef8 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ab7b33bf-904a-48ad-bfbc-0fc5fd07eef8 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.633 03:31:56 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:03.634 03:31:56 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.634 03:31:56 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.634 03:31:56 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.634 03:31:56 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.634 03:31:56 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.634 03:31:56 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.634 03:31:56 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.634 03:31:56 json_config -- paths/export.sh@5 -- # export PATH 00:05:03.634 03:31:56 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@51 -- # : 0 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.634 03:31:56 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.634 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.634 03:31:56 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.634 03:31:56 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:03.634 03:31:56 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:03.634 03:31:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:03.634 03:31:56 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:03.634 03:31:56 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:03.634 WARNING: No tests are enabled so not running JSON configuration tests 00:05:03.634 03:31:56 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:03.634 03:31:56 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:03.634 00:05:03.634 real 0m0.146s 00:05:03.634 user 0m0.089s 00:05:03.634 sys 0m0.054s 00:05:03.634 03:31:56 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:03.634 03:31:56 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 ************************************ 00:05:03.634 END TEST json_config 00:05:03.634 ************************************ 00:05:03.634 03:31:56 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:03.634 03:31:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:03.634 03:31:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:03.634 03:31:56 -- common/autotest_common.sh@10 -- # set +x 00:05:03.634 ************************************ 00:05:03.634 START TEST json_config_extra_key 00:05:03.634 ************************************ 00:05:03.634 03:31:56 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:03.634 03:31:56 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:03.634 03:31:56 json_config_extra_key -- common/autotest_common.sh@1681 -- # lcov --version 00:05:03.634 03:31:56 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:03.895 03:31:56 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:03.895 03:31:56 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:03.895 03:31:56 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:03.895 03:31:56 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:03.895 03:31:56 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:03.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.895 --rc genhtml_branch_coverage=1 00:05:03.895 --rc genhtml_function_coverage=1 00:05:03.895 --rc genhtml_legend=1 00:05:03.895 --rc geninfo_all_blocks=1 00:05:03.895 --rc geninfo_unexecuted_blocks=1 00:05:03.895 00:05:03.895 ' 00:05:03.895 03:31:56 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:03.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.895 --rc genhtml_branch_coverage=1 00:05:03.895 --rc genhtml_function_coverage=1 00:05:03.895 --rc genhtml_legend=1 00:05:03.895 --rc geninfo_all_blocks=1 00:05:03.895 --rc geninfo_unexecuted_blocks=1 00:05:03.895 00:05:03.895 ' 00:05:03.895 03:31:56 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:03.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.895 --rc genhtml_branch_coverage=1 00:05:03.895 --rc genhtml_function_coverage=1 00:05:03.895 --rc genhtml_legend=1 00:05:03.895 --rc geninfo_all_blocks=1 00:05:03.895 --rc geninfo_unexecuted_blocks=1 00:05:03.895 00:05:03.895 ' 00:05:03.895 03:31:56 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:03.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:03.895 --rc genhtml_branch_coverage=1 00:05:03.895 --rc 
genhtml_function_coverage=1 00:05:03.895 --rc genhtml_legend=1 00:05:03.895 --rc geninfo_all_blocks=1 00:05:03.895 --rc geninfo_unexecuted_blocks=1 00:05:03.895 00:05:03.895 ' 00:05:03.895 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:03.895 03:31:56 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:03.895 03:31:56 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:03.895 03:31:56 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:03.895 03:31:56 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ab7b33bf-904a-48ad-bfbc-0fc5fd07eef8 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ab7b33bf-904a-48ad-bfbc-0fc5fd07eef8 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:03.896 03:31:56 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:03.896 03:31:56 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:03.896 03:31:56 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:03.896 03:31:56 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:03.896 03:31:56 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.896 03:31:56 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.896 03:31:56 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.896 03:31:56 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:03.896 03:31:56 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:03.896 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:03.896 03:31:56 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:03.896 INFO: launching applications... 
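NOTE: the "[: : integer expression expected" message above (nvmf/common.sh line 33) is a bash pitfall, not a test failure: POSIX '[' requires both operands of -eq to be integers, and here the left-hand expansion is an empty string. It is non-fatal in this run, since the script continues to line 37. A minimal reproduction and two defensive rewrites (the variable name FLAG is illustrative, not the one the script uses):

    # Reproduction: an empty operand makes POSIX [ print
    # "[: : integer expression expected" and return non-zero
    FLAG=''
    [ "$FLAG" -eq 1 ] && echo enabled

    # Defensive forms: default the expansion, or use [[ ]], whose -eq
    # arithmetically evaluates an empty string as 0 instead of erroring
    [ "${FLAG:-0}" -eq 1 ] && echo enabled
    [[ $FLAG -eq 1 ]] && echo enabled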
00:05:03.896 03:31:56 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58216 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:03.896 Waiting for target to run... 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58216 /var/tmp/spdk_tgt.sock 00:05:03.896 03:31:56 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 58216 ']' 00:05:03.896 03:31:56 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:03.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:03.896 03:31:56 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:03.896 03:31:56 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:03.896 03:31:56 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:03.896 03:31:56 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:03.896 03:31:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:03.896 [2024-10-01 03:31:56.342565] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:03.896 [2024-10-01 03:31:56.342818] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58216 ] 00:05:04.158 [2024-10-01 03:31:56.652617] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:04.418 [2024-10-01 03:31:56.830126] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.989 03:31:57 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:04.989 00:05:04.989 INFO: shutting down applications... 00:05:04.989 03:31:57 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:04.989 03:31:57 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:04.989 03:31:57 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
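NOTE: waitforlisten, traced above, blocks until the freshly launched spdk_tgt both stays alive and answers on its UNIX-domain RPC socket (max_retries=100 in the trace). A minimal sketch of that pattern, assuming illustrative paths and substituting a plain socket-existence probe where the real helper issues an RPC:

    # Sketch: launch a target in the background, then wait for its RPC socket
    app=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    sock=/var/tmp/spdk_tgt.sock

    "$app" -m 0x1 -s 1024 -r "$sock" --json extra_key.json &
    pid=$!

    for ((i = 0; i < 100; i++)); do                # max_retries=100, as in the trace
        kill -0 "$pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        [[ -S $sock ]] && break                    # real helper sends an RPC instead
        sleep 0.1
    done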
00:05:04.989 03:31:57 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:04.989 03:31:57 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:04.989 03:31:57 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.989 03:31:57 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58216 ]] 00:05:04.989 03:31:57 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58216 00:05:04.989 03:31:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.989 03:31:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.989 03:31:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58216 00:05:04.989 03:31:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.562 03:31:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.562 03:31:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.562 03:31:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58216 00:05:05.562 03:31:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.823 03:31:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.823 03:31:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.823 03:31:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58216 00:05:05.823 03:31:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.395 03:31:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.395 03:31:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.395 03:31:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58216 00:05:06.395 03:31:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:06.964 03:31:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:06.964 03:31:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:06.964 03:31:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58216 00:05:06.964 03:31:59 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:06.964 03:31:59 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:06.964 03:31:59 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:06.964 SPDK target shutdown done 00:05:06.964 03:31:59 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:06.964 Success 00:05:06.964 03:31:59 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:06.964 ************************************ 00:05:06.964 END TEST json_config_extra_key 00:05:06.964 ************************************ 00:05:06.964 00:05:06.964 real 0m3.233s 00:05:06.964 user 0m2.882s 00:05:06.964 sys 0m0.395s 00:05:06.964 03:31:59 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:06.965 03:31:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:06.965 03:31:59 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.965 03:31:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:06.965 03:31:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:06.965 03:31:59 -- common/autotest_common.sh@10 -- # set +x 00:05:06.965 
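NOTE: the shutdown sequence traced above (json_config/common.sh@38-45) is SIGINT followed by a bounded liveness poll: kill -0 only tests whether the pid still exists, and the loop gives the target up to 30 half-second ticks to exit cleanly before the harness gives up. A sketch of the same loop:

    # Sketch of the traced shutdown loop: SIGINT, then poll liveness
    kill -SIGINT "$pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$pid" 2>/dev/null || break        # gone: clean shutdown
        sleep 0.5
    done
    kill -0 "$pid" 2>/dev/null && echo "target did not exit" >&2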
************************************ 00:05:06.965 START TEST alias_rpc 00:05:06.965 ************************************ 00:05:06.965 03:31:59 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:06.965 * Looking for test storage... 00:05:06.965 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:06.965 03:31:59 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:06.965 03:31:59 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:06.965 03:31:59 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.226 03:31:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:07.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.226 --rc genhtml_branch_coverage=1 00:05:07.226 --rc genhtml_function_coverage=1 00:05:07.226 --rc genhtml_legend=1 00:05:07.226 --rc geninfo_all_blocks=1 00:05:07.226 --rc geninfo_unexecuted_blocks=1 00:05:07.226 00:05:07.226 ' 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:07.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.226 --rc genhtml_branch_coverage=1 00:05:07.226 --rc genhtml_function_coverage=1 00:05:07.226 --rc genhtml_legend=1 00:05:07.226 --rc geninfo_all_blocks=1 00:05:07.226 --rc geninfo_unexecuted_blocks=1 00:05:07.226 00:05:07.226 ' 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:07.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.226 --rc genhtml_branch_coverage=1 00:05:07.226 --rc genhtml_function_coverage=1 00:05:07.226 --rc genhtml_legend=1 00:05:07.226 --rc geninfo_all_blocks=1 00:05:07.226 --rc geninfo_unexecuted_blocks=1 00:05:07.226 00:05:07.226 ' 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:07.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.226 --rc genhtml_branch_coverage=1 00:05:07.226 --rc genhtml_function_coverage=1 00:05:07.226 --rc genhtml_legend=1 00:05:07.226 --rc geninfo_all_blocks=1 00:05:07.226 --rc geninfo_unexecuted_blocks=1 00:05:07.226 00:05:07.226 ' 00:05:07.226 03:31:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:07.226 03:31:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58309 00:05:07.226 03:31:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58309 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 58309 ']' 00:05:07.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
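NOTE: the scripts/common.sh trace repeated before each test binary above is version gating: "lt 1.15 2" splits each version string on '.', '-' and ':' into an array (the IFS=.-: / read -ra steps), compares components numerically until one side wins, and the result decides whether the installed lcov is new enough for the branch/function coverage flags. A condensed sketch of that comparator (the real cmp_versions also normalizes non-numeric components, which this omits):

    # Condensed version of the comparator traced above ("lt 1.15 2")
    lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
        for ((i = 0; i < max; i++)); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0   # strictly older
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1                                          # equal: not less-than
    }
    lt 1.15 2 && echo "lcov < 2: use legacy --rc options"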
00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:07.226 03:31:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.226 03:31:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:07.226 [2024-10-01 03:31:59.627465] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:07.226 [2024-10-01 03:31:59.627591] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58309 ] 00:05:07.487 [2024-10-01 03:31:59.778203] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.487 [2024-10-01 03:31:59.958098] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.059 03:32:00 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:08.059 03:32:00 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:08.059 03:32:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:08.321 03:32:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58309 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 58309 ']' 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 58309 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58309 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.321 killing process with pid 58309 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58309' 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@969 -- # kill 58309 00:05:08.321 03:32:00 alias_rpc -- common/autotest_common.sh@974 -- # wait 58309 00:05:09.706 00:05:09.706 real 0m2.837s 00:05:09.706 user 0m2.902s 00:05:09.706 sys 0m0.437s 00:05:09.706 03:32:02 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.706 03:32:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.706 ************************************ 00:05:09.706 END TEST alias_rpc 00:05:09.706 ************************************ 00:05:09.966 03:32:02 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:09.966 03:32:02 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.966 03:32:02 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.966 03:32:02 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.966 03:32:02 -- common/autotest_common.sh@10 -- # set +x 00:05:09.966 ************************************ 00:05:09.966 START TEST spdkcli_tcp 00:05:09.966 ************************************ 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:09.966 * Looking for test storage... 
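NOTE: killprocess, traced at the end of alias_rpc above, is deliberately defensive: before signalling, it re-checks that the pid is still alive (kill -0), that the command name still matches an SPDK reactor (ps --no-headers -o comm=, reactor_0 in the trace), and that it is not about to kill something running as sudo; only then does it kill and reap. A sketch under those same checks:

    # Sketch of the killprocess guard sequence traced above
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0     # already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_0
        [[ $name == sudo ]] && return 1            # refuse to kill sudo
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                # reap; propagates exit status
    }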
00:05:09.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:09.966 03:32:02 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:09.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.966 --rc genhtml_branch_coverage=1 00:05:09.966 --rc genhtml_function_coverage=1 00:05:09.966 --rc genhtml_legend=1 00:05:09.966 --rc geninfo_all_blocks=1 00:05:09.966 --rc geninfo_unexecuted_blocks=1 00:05:09.966 00:05:09.966 ' 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:09.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.966 --rc genhtml_branch_coverage=1 00:05:09.966 --rc genhtml_function_coverage=1 00:05:09.966 --rc genhtml_legend=1 00:05:09.966 --rc geninfo_all_blocks=1 00:05:09.966 --rc geninfo_unexecuted_blocks=1 00:05:09.966 
00:05:09.966 ' 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:09.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.966 --rc genhtml_branch_coverage=1 00:05:09.966 --rc genhtml_function_coverage=1 00:05:09.966 --rc genhtml_legend=1 00:05:09.966 --rc geninfo_all_blocks=1 00:05:09.966 --rc geninfo_unexecuted_blocks=1 00:05:09.966 00:05:09.966 ' 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:09.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:09.966 --rc genhtml_branch_coverage=1 00:05:09.966 --rc genhtml_function_coverage=1 00:05:09.966 --rc genhtml_legend=1 00:05:09.966 --rc geninfo_all_blocks=1 00:05:09.966 --rc geninfo_unexecuted_blocks=1 00:05:09.966 00:05:09.966 ' 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58401 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58401 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 58401 ']' 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.966 03:32:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:09.966 03:32:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:10.224 [2024-10-01 03:32:02.536040] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:10.224 [2024-10-01 03:32:02.536153] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58401 ] 00:05:10.224 [2024-10-01 03:32:02.681930] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.497 [2024-10-01 03:32:02.827466] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.497 [2024-10-01 03:32:02.827526] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.063 03:32:03 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.063 03:32:03 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:11.063 03:32:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58417 00:05:11.063 03:32:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:11.063 03:32:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:11.063 [ 00:05:11.063 "bdev_malloc_delete", 00:05:11.063 "bdev_malloc_create", 00:05:11.063 "bdev_null_resize", 00:05:11.063 "bdev_null_delete", 00:05:11.063 "bdev_null_create", 00:05:11.063 "bdev_nvme_cuse_unregister", 00:05:11.063 "bdev_nvme_cuse_register", 00:05:11.063 "bdev_opal_new_user", 00:05:11.063 "bdev_opal_set_lock_state", 00:05:11.063 "bdev_opal_delete", 00:05:11.063 "bdev_opal_get_info", 00:05:11.063 "bdev_opal_create", 00:05:11.063 "bdev_nvme_opal_revert", 00:05:11.063 "bdev_nvme_opal_init", 00:05:11.063 "bdev_nvme_send_cmd", 00:05:11.063 "bdev_nvme_set_keys", 00:05:11.063 "bdev_nvme_get_path_iostat", 00:05:11.063 "bdev_nvme_get_mdns_discovery_info", 00:05:11.063 "bdev_nvme_stop_mdns_discovery", 00:05:11.063 "bdev_nvme_start_mdns_discovery", 00:05:11.063 "bdev_nvme_set_multipath_policy", 00:05:11.063 "bdev_nvme_set_preferred_path", 00:05:11.064 "bdev_nvme_get_io_paths", 00:05:11.064 "bdev_nvme_remove_error_injection", 00:05:11.064 "bdev_nvme_add_error_injection", 00:05:11.064 "bdev_nvme_get_discovery_info", 00:05:11.064 "bdev_nvme_stop_discovery", 00:05:11.064 "bdev_nvme_start_discovery", 00:05:11.064 "bdev_nvme_get_controller_health_info", 00:05:11.064 "bdev_nvme_disable_controller", 00:05:11.064 "bdev_nvme_enable_controller", 00:05:11.064 "bdev_nvme_reset_controller", 00:05:11.064 "bdev_nvme_get_transport_statistics", 00:05:11.064 "bdev_nvme_apply_firmware", 00:05:11.064 "bdev_nvme_detach_controller", 00:05:11.064 "bdev_nvme_get_controllers", 00:05:11.064 "bdev_nvme_attach_controller", 00:05:11.064 "bdev_nvme_set_hotplug", 00:05:11.064 "bdev_nvme_set_options", 00:05:11.064 "bdev_passthru_delete", 00:05:11.064 "bdev_passthru_create", 00:05:11.064 "bdev_lvol_set_parent_bdev", 00:05:11.064 "bdev_lvol_set_parent", 00:05:11.064 "bdev_lvol_check_shallow_copy", 00:05:11.064 "bdev_lvol_start_shallow_copy", 00:05:11.064 "bdev_lvol_grow_lvstore", 00:05:11.064 "bdev_lvol_get_lvols", 00:05:11.064 "bdev_lvol_get_lvstores", 00:05:11.064 "bdev_lvol_delete", 00:05:11.064 "bdev_lvol_set_read_only", 00:05:11.064 "bdev_lvol_resize", 00:05:11.064 "bdev_lvol_decouple_parent", 00:05:11.064 "bdev_lvol_inflate", 00:05:11.064 "bdev_lvol_rename", 00:05:11.064 "bdev_lvol_clone_bdev", 00:05:11.064 "bdev_lvol_clone", 00:05:11.064 "bdev_lvol_snapshot", 00:05:11.064 "bdev_lvol_create", 00:05:11.064 "bdev_lvol_delete_lvstore", 00:05:11.064 "bdev_lvol_rename_lvstore", 00:05:11.064 
"bdev_lvol_create_lvstore", 00:05:11.064 "bdev_raid_set_options", 00:05:11.064 "bdev_raid_remove_base_bdev", 00:05:11.064 "bdev_raid_add_base_bdev", 00:05:11.064 "bdev_raid_delete", 00:05:11.064 "bdev_raid_create", 00:05:11.064 "bdev_raid_get_bdevs", 00:05:11.064 "bdev_error_inject_error", 00:05:11.064 "bdev_error_delete", 00:05:11.064 "bdev_error_create", 00:05:11.064 "bdev_split_delete", 00:05:11.064 "bdev_split_create", 00:05:11.064 "bdev_delay_delete", 00:05:11.064 "bdev_delay_create", 00:05:11.064 "bdev_delay_update_latency", 00:05:11.064 "bdev_zone_block_delete", 00:05:11.064 "bdev_zone_block_create", 00:05:11.064 "blobfs_create", 00:05:11.064 "blobfs_detect", 00:05:11.064 "blobfs_set_cache_size", 00:05:11.064 "bdev_xnvme_delete", 00:05:11.064 "bdev_xnvme_create", 00:05:11.064 "bdev_aio_delete", 00:05:11.064 "bdev_aio_rescan", 00:05:11.064 "bdev_aio_create", 00:05:11.064 "bdev_ftl_set_property", 00:05:11.064 "bdev_ftl_get_properties", 00:05:11.064 "bdev_ftl_get_stats", 00:05:11.064 "bdev_ftl_unmap", 00:05:11.064 "bdev_ftl_unload", 00:05:11.064 "bdev_ftl_delete", 00:05:11.064 "bdev_ftl_load", 00:05:11.064 "bdev_ftl_create", 00:05:11.064 "bdev_virtio_attach_controller", 00:05:11.064 "bdev_virtio_scsi_get_devices", 00:05:11.064 "bdev_virtio_detach_controller", 00:05:11.064 "bdev_virtio_blk_set_hotplug", 00:05:11.064 "bdev_iscsi_delete", 00:05:11.064 "bdev_iscsi_create", 00:05:11.064 "bdev_iscsi_set_options", 00:05:11.064 "accel_error_inject_error", 00:05:11.064 "ioat_scan_accel_module", 00:05:11.064 "dsa_scan_accel_module", 00:05:11.064 "iaa_scan_accel_module", 00:05:11.064 "keyring_file_remove_key", 00:05:11.064 "keyring_file_add_key", 00:05:11.064 "keyring_linux_set_options", 00:05:11.064 "fsdev_aio_delete", 00:05:11.064 "fsdev_aio_create", 00:05:11.064 "iscsi_get_histogram", 00:05:11.064 "iscsi_enable_histogram", 00:05:11.064 "iscsi_set_options", 00:05:11.064 "iscsi_get_auth_groups", 00:05:11.064 "iscsi_auth_group_remove_secret", 00:05:11.064 "iscsi_auth_group_add_secret", 00:05:11.064 "iscsi_delete_auth_group", 00:05:11.064 "iscsi_create_auth_group", 00:05:11.064 "iscsi_set_discovery_auth", 00:05:11.064 "iscsi_get_options", 00:05:11.064 "iscsi_target_node_request_logout", 00:05:11.064 "iscsi_target_node_set_redirect", 00:05:11.064 "iscsi_target_node_set_auth", 00:05:11.064 "iscsi_target_node_add_lun", 00:05:11.064 "iscsi_get_stats", 00:05:11.064 "iscsi_get_connections", 00:05:11.064 "iscsi_portal_group_set_auth", 00:05:11.064 "iscsi_start_portal_group", 00:05:11.064 "iscsi_delete_portal_group", 00:05:11.064 "iscsi_create_portal_group", 00:05:11.064 "iscsi_get_portal_groups", 00:05:11.064 "iscsi_delete_target_node", 00:05:11.064 "iscsi_target_node_remove_pg_ig_maps", 00:05:11.064 "iscsi_target_node_add_pg_ig_maps", 00:05:11.064 "iscsi_create_target_node", 00:05:11.064 "iscsi_get_target_nodes", 00:05:11.064 "iscsi_delete_initiator_group", 00:05:11.064 "iscsi_initiator_group_remove_initiators", 00:05:11.064 "iscsi_initiator_group_add_initiators", 00:05:11.064 "iscsi_create_initiator_group", 00:05:11.064 "iscsi_get_initiator_groups", 00:05:11.064 "nvmf_set_crdt", 00:05:11.064 "nvmf_set_config", 00:05:11.064 "nvmf_set_max_subsystems", 00:05:11.064 "nvmf_stop_mdns_prr", 00:05:11.064 "nvmf_publish_mdns_prr", 00:05:11.064 "nvmf_subsystem_get_listeners", 00:05:11.064 "nvmf_subsystem_get_qpairs", 00:05:11.064 "nvmf_subsystem_get_controllers", 00:05:11.064 "nvmf_get_stats", 00:05:11.064 "nvmf_get_transports", 00:05:11.064 "nvmf_create_transport", 00:05:11.064 "nvmf_get_targets", 00:05:11.064 
"nvmf_delete_target", 00:05:11.064 "nvmf_create_target", 00:05:11.064 "nvmf_subsystem_allow_any_host", 00:05:11.064 "nvmf_subsystem_set_keys", 00:05:11.064 "nvmf_subsystem_remove_host", 00:05:11.064 "nvmf_subsystem_add_host", 00:05:11.064 "nvmf_ns_remove_host", 00:05:11.064 "nvmf_ns_add_host", 00:05:11.064 "nvmf_subsystem_remove_ns", 00:05:11.064 "nvmf_subsystem_set_ns_ana_group", 00:05:11.064 "nvmf_subsystem_add_ns", 00:05:11.064 "nvmf_subsystem_listener_set_ana_state", 00:05:11.064 "nvmf_discovery_get_referrals", 00:05:11.064 "nvmf_discovery_remove_referral", 00:05:11.064 "nvmf_discovery_add_referral", 00:05:11.064 "nvmf_subsystem_remove_listener", 00:05:11.064 "nvmf_subsystem_add_listener", 00:05:11.064 "nvmf_delete_subsystem", 00:05:11.064 "nvmf_create_subsystem", 00:05:11.064 "nvmf_get_subsystems", 00:05:11.064 "env_dpdk_get_mem_stats", 00:05:11.064 "nbd_get_disks", 00:05:11.064 "nbd_stop_disk", 00:05:11.064 "nbd_start_disk", 00:05:11.064 "ublk_recover_disk", 00:05:11.064 "ublk_get_disks", 00:05:11.064 "ublk_stop_disk", 00:05:11.064 "ublk_start_disk", 00:05:11.064 "ublk_destroy_target", 00:05:11.064 "ublk_create_target", 00:05:11.064 "virtio_blk_create_transport", 00:05:11.064 "virtio_blk_get_transports", 00:05:11.064 "vhost_controller_set_coalescing", 00:05:11.064 "vhost_get_controllers", 00:05:11.064 "vhost_delete_controller", 00:05:11.064 "vhost_create_blk_controller", 00:05:11.064 "vhost_scsi_controller_remove_target", 00:05:11.064 "vhost_scsi_controller_add_target", 00:05:11.064 "vhost_start_scsi_controller", 00:05:11.064 "vhost_create_scsi_controller", 00:05:11.064 "thread_set_cpumask", 00:05:11.064 "scheduler_set_options", 00:05:11.064 "framework_get_governor", 00:05:11.064 "framework_get_scheduler", 00:05:11.064 "framework_set_scheduler", 00:05:11.064 "framework_get_reactors", 00:05:11.064 "thread_get_io_channels", 00:05:11.064 "thread_get_pollers", 00:05:11.064 "thread_get_stats", 00:05:11.064 "framework_monitor_context_switch", 00:05:11.064 "spdk_kill_instance", 00:05:11.064 "log_enable_timestamps", 00:05:11.064 "log_get_flags", 00:05:11.064 "log_clear_flag", 00:05:11.064 "log_set_flag", 00:05:11.064 "log_get_level", 00:05:11.064 "log_set_level", 00:05:11.064 "log_get_print_level", 00:05:11.064 "log_set_print_level", 00:05:11.064 "framework_enable_cpumask_locks", 00:05:11.064 "framework_disable_cpumask_locks", 00:05:11.064 "framework_wait_init", 00:05:11.064 "framework_start_init", 00:05:11.064 "scsi_get_devices", 00:05:11.064 "bdev_get_histogram", 00:05:11.064 "bdev_enable_histogram", 00:05:11.064 "bdev_set_qos_limit", 00:05:11.064 "bdev_set_qd_sampling_period", 00:05:11.064 "bdev_get_bdevs", 00:05:11.064 "bdev_reset_iostat", 00:05:11.064 "bdev_get_iostat", 00:05:11.064 "bdev_examine", 00:05:11.064 "bdev_wait_for_examine", 00:05:11.064 "bdev_set_options", 00:05:11.064 "accel_get_stats", 00:05:11.064 "accel_set_options", 00:05:11.064 "accel_set_driver", 00:05:11.064 "accel_crypto_key_destroy", 00:05:11.064 "accel_crypto_keys_get", 00:05:11.064 "accel_crypto_key_create", 00:05:11.064 "accel_assign_opc", 00:05:11.064 "accel_get_module_info", 00:05:11.064 "accel_get_opc_assignments", 00:05:11.064 "vmd_rescan", 00:05:11.064 "vmd_remove_device", 00:05:11.064 "vmd_enable", 00:05:11.064 "sock_get_default_impl", 00:05:11.064 "sock_set_default_impl", 00:05:11.064 "sock_impl_set_options", 00:05:11.064 "sock_impl_get_options", 00:05:11.064 "iobuf_get_stats", 00:05:11.064 "iobuf_set_options", 00:05:11.064 "keyring_get_keys", 00:05:11.064 "framework_get_pci_devices", 00:05:11.064 
"framework_get_config", 00:05:11.064 "framework_get_subsystems", 00:05:11.064 "fsdev_set_opts", 00:05:11.064 "fsdev_get_opts", 00:05:11.064 "trace_get_info", 00:05:11.064 "trace_get_tpoint_group_mask", 00:05:11.064 "trace_disable_tpoint_group", 00:05:11.064 "trace_enable_tpoint_group", 00:05:11.064 "trace_clear_tpoint_mask", 00:05:11.064 "trace_set_tpoint_mask", 00:05:11.064 "notify_get_notifications", 00:05:11.064 "notify_get_types", 00:05:11.064 "spdk_get_version", 00:05:11.064 "rpc_get_methods" 00:05:11.064 ] 00:05:11.064 03:32:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:11.064 03:32:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:11.064 03:32:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58401 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 58401 ']' 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 58401 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58401 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.064 killing process with pid 58401 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58401' 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 58401 00:05:11.064 03:32:03 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 58401 00:05:12.442 00:05:12.442 real 0m2.501s 00:05:12.442 user 0m4.304s 00:05:12.442 sys 0m0.392s 00:05:12.442 03:32:04 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:12.442 ************************************ 00:05:12.442 END TEST spdkcli_tcp 00:05:12.442 ************************************ 00:05:12.442 03:32:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:12.442 03:32:04 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.442 03:32:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:12.442 03:32:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:12.442 03:32:04 -- common/autotest_common.sh@10 -- # set +x 00:05:12.442 ************************************ 00:05:12.442 START TEST dpdk_mem_utility 00:05:12.442 ************************************ 00:05:12.442 03:32:04 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:12.442 * Looking for test storage... 
00:05:12.442 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:12.442 03:32:04 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:12.442 03:32:04 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:12.442 03:32:04 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:12.704 03:32:04 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:12.704 03:32:04 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.704 03:32:05 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:12.704 03:32:05 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:12.704 03:32:05 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.704 03:32:05 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:12.704 03:32:05 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.704 03:32:05 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.704 03:32:05 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.704 03:32:05 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:12.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.704 --rc genhtml_branch_coverage=1 00:05:12.704 --rc genhtml_function_coverage=1 00:05:12.704 --rc genhtml_legend=1 00:05:12.704 --rc geninfo_all_blocks=1 00:05:12.704 --rc geninfo_unexecuted_blocks=1 00:05:12.704 00:05:12.704 ' 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:12.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.704 --rc 
genhtml_branch_coverage=1 00:05:12.704 --rc genhtml_function_coverage=1 00:05:12.704 --rc genhtml_legend=1 00:05:12.704 --rc geninfo_all_blocks=1 00:05:12.704 --rc geninfo_unexecuted_blocks=1 00:05:12.704 00:05:12.704 ' 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:12.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.704 --rc genhtml_branch_coverage=1 00:05:12.704 --rc genhtml_function_coverage=1 00:05:12.704 --rc genhtml_legend=1 00:05:12.704 --rc geninfo_all_blocks=1 00:05:12.704 --rc geninfo_unexecuted_blocks=1 00:05:12.704 00:05:12.704 ' 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:12.704 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.704 --rc genhtml_branch_coverage=1 00:05:12.704 --rc genhtml_function_coverage=1 00:05:12.704 --rc genhtml_legend=1 00:05:12.704 --rc geninfo_all_blocks=1 00:05:12.704 --rc geninfo_unexecuted_blocks=1 00:05:12.704 00:05:12.704 ' 00:05:12.704 03:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:12.704 03:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58511 00:05:12.704 03:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58511 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58511 ']' 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:12.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:12.704 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:12.704 03:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:12.704 [2024-10-01 03:32:05.083060] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:12.704 [2024-10-01 03:32:05.083186] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58511 ] 00:05:12.704 [2024-10-01 03:32:05.230557] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.965 [2024-10-01 03:32:05.405883] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.537 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:13.537 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:13.537 03:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:13.537 03:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:13.537 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:13.537 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:13.537 { 00:05:13.537 "filename": "/tmp/spdk_mem_dump.txt" 00:05:13.537 } 00:05:13.537 03:32:05 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:13.537 03:32:05 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:13.537 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:13.537 1 heaps totaling size 866.000000 MiB 00:05:13.537 size: 866.000000 MiB heap id: 0 00:05:13.537 end heaps---------- 00:05:13.537 9 mempools totaling size 642.649841 MiB 00:05:13.537 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:13.537 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:13.537 size: 92.545471 MiB name: bdev_io_58511 00:05:13.537 size: 51.011292 MiB name: evtpool_58511 00:05:13.537 size: 50.003479 MiB name: msgpool_58511 00:05:13.537 size: 36.509338 MiB name: fsdev_io_58511 00:05:13.537 size: 21.763794 MiB name: PDU_Pool 00:05:13.537 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:13.537 size: 0.026123 MiB name: Session_Pool 00:05:13.537 end mempools------- 00:05:13.537 6 memzones totaling size 4.142822 MiB 00:05:13.537 size: 1.000366 MiB name: RG_ring_0_58511 00:05:13.537 size: 1.000366 MiB name: RG_ring_1_58511 00:05:13.537 size: 1.000366 MiB name: RG_ring_4_58511 00:05:13.537 size: 1.000366 MiB name: RG_ring_5_58511 00:05:13.537 size: 0.125366 MiB name: RG_ring_2_58511 00:05:13.537 size: 0.015991 MiB name: RG_ring_3_58511 00:05:13.537 end memzones------- 00:05:13.537 03:32:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:13.800 heap id: 0 total size: 866.000000 MiB number of busy elements: 309 number of free elements: 19 00:05:13.800 list of free elements. 
size: 19.915039 MiB 00:05:13.800 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:13.800 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:13.800 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:13.800 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:13.800 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:13.800 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:13.800 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:13.800 element at address: 0x20001c400000 with size: 0.999084 MiB 00:05:13.800 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:13.800 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:05:13.800 element at address: 0x20001c700040 with size: 0.936401 MiB 00:05:13.800 element at address: 0x200000200000 with size: 0.832153 MiB 00:05:13.800 element at address: 0x20001de00000 with size: 0.561218 MiB 00:05:13.800 element at address: 0x200003e00000 with size: 0.490906 MiB 00:05:13.800 element at address: 0x20001c000000 with size: 0.488464 MiB 00:05:13.800 element at address: 0x20001c800000 with size: 0.485413 MiB 00:05:13.800 element at address: 0x200015e00000 with size: 0.444458 MiB 00:05:13.800 element at address: 0x20002b200000 with size: 0.390930 MiB 00:05:13.800 element at address: 0x200003a00000 with size: 0.352844 MiB 00:05:13.800 list of standard malloc elements. size: 199.286255 MiB 00:05:13.800 element at address: 0x20000d9fef80 with size: 132.000183 MiB 00:05:13.800 element at address: 0x2000097fef80 with size: 64.000183 MiB 00:05:13.800 element at address: 0x20001bdfff80 with size: 1.000183 MiB 00:05:13.800 element at address: 0x20001c1fff80 with size: 1.000183 MiB 00:05:13.801 element at address: 0x20001c5fff80 with size: 1.000183 MiB 00:05:13.801 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:13.801 element at address: 0x20001c7eff40 with size: 0.062683 MiB 00:05:13.801 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:13.801 element at address: 0x20000d7ff040 with size: 0.000427 MiB 00:05:13.801 element at address: 0x20001c7efdc0 with size: 0.000366 MiB 00:05:13.801 element at address: 0x200015dff040 with size: 0.000305 MiB 00:05:13.801 element at address: 0x2000002d5080 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5180 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5280 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5380 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5480 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5580 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5680 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5780 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5880 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5980 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5a80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5b80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5c80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5d80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5e80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d5f80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6080 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6300 with size: 0.000244 MiB 
00:05:13.801 element at address: 0x2000002d6400 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6500 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6600 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6700 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6800 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6900 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6a00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6b00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6c00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6d00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6e00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d6f00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7000 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7100 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7200 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7300 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7400 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7500 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7600 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7700 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7800 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7e9c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7eac0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7ebc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7ecc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7edc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7eec0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7efc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7f0c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7f1c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7f2c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003a7f3c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003aff700 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7dac0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7dbc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7dcc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7ddc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7dec0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7dfc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7e0c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7e1c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7e2c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7e3c0 with size: 0.000244 MiB 00:05:13.801 element at 
address: 0x200003e7e4c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7e5c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7e6c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7e7c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7e8c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7e9c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7eac0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7ebc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003e7ecc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ff200 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ff300 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ff400 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ff500 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ff600 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ff700 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ff800 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ff900 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ffa00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ffb00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ffc00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ffd00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7ffe00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20000d7fff00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dff180 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dff280 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dff380 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dff480 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dff580 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dff680 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dff780 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dff880 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dff980 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dffa80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dffb80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dffc80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015dfff00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015e71c80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015e71d80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015e71e80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015e71f80 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015e72080 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015e72180 with size: 0.000244 MiB 00:05:13.801 element at address: 0x200015ef24c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001bcfdd00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d0c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d1c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d2c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d3c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d4c0 
with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d5c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d6c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d7c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d8c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c07d9c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c0fdd00 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c4ffc40 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c7efbc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c7efcc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001c8bc680 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de8fac0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de8fbc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de8fcc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de8fdc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de8fec0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de8ffc0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de900c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de901c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de902c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de903c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de904c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de905c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de906c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de907c0 with size: 0.000244 MiB 00:05:13.801 element at address: 0x20001de908c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de909c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de90ac0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de90bc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de90cc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de90dc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de90ec0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de90fc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de910c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de911c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de912c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de913c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de914c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de915c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de916c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de917c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de918c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de919c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de91ac0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de91bc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de91cc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de91dc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de91ec0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de91fc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de920c0 with size: 0.000244 MiB 
00:05:13.802 element at address: 0x20001de921c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de922c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de923c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de924c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de925c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de926c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de927c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de928c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de929c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de92ac0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de92bc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de92cc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de92dc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de92ec0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de92fc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de930c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de931c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de932c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de933c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de934c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de935c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de936c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de937c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de938c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de939c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de93ac0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de93bc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de93cc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de93dc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de93ec0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de93fc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de940c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de941c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de942c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de943c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de944c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de945c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de946c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de947c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de948c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de949c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de94ac0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de94bc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de94cc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de94dc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de94ec0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de94fc0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de950c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de951c0 with size: 0.000244 MiB 00:05:13.802 element at 
address: 0x20001de952c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20001de953c0 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b264140 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b264240 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26af00 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26b180 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26b280 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26b380 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26b480 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26b580 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26b680 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26b780 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26b880 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26b980 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26ba80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26bb80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26bc80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26bd80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26be80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26bf80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c080 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c180 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c280 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c380 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c480 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c580 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c680 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c780 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c880 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26c980 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26ca80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26cb80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26cc80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26cd80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26ce80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26cf80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d080 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d180 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d280 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d380 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d480 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d580 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d680 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d780 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d880 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26d980 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26da80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26db80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26dc80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26dd80 
with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26de80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26df80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e080 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e180 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e280 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e380 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e480 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e580 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e680 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e780 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e880 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26e980 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26ea80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26eb80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26ec80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26ed80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26ee80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26ef80 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f080 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f180 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f280 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f380 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f480 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f580 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f680 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f780 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f880 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26f980 with size: 0.000244 MiB 00:05:13.802 element at address: 0x20002b26fa80 with size: 0.000244 MiB 00:05:13.803 element at address: 0x20002b26fb80 with size: 0.000244 MiB 00:05:13.803 element at address: 0x20002b26fc80 with size: 0.000244 MiB 00:05:13.803 element at address: 0x20002b26fd80 with size: 0.000244 MiB 00:05:13.803 element at address: 0x20002b26fe80 with size: 0.000244 MiB 00:05:13.803 list of memzone associated elements. 
size: 646.798706 MiB 00:05:13.803 element at address: 0x20001de954c0 with size: 211.416809 MiB 00:05:13.803 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:13.803 element at address: 0x20002b26ff80 with size: 157.562622 MiB 00:05:13.803 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:13.803 element at address: 0x200015ff4740 with size: 92.045105 MiB 00:05:13.803 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58511_0 00:05:13.803 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:13.803 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58511_0 00:05:13.803 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:13.803 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58511_0 00:05:13.803 element at address: 0x2000071fdb40 with size: 36.008972 MiB 00:05:13.803 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58511_0 00:05:13.803 element at address: 0x20001c9be900 with size: 20.255615 MiB 00:05:13.803 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:13.803 element at address: 0x2000351feb00 with size: 18.005127 MiB 00:05:13.803 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:13.803 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:13.803 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58511 00:05:13.803 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:13.803 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58511 00:05:13.803 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:13.803 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58511 00:05:13.803 element at address: 0x20001c0fde00 with size: 1.008179 MiB 00:05:13.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:13.803 element at address: 0x20001c8bc780 with size: 1.008179 MiB 00:05:13.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:13.803 element at address: 0x20001bcfde00 with size: 1.008179 MiB 00:05:13.803 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:13.803 element at address: 0x200015ef25c0 with size: 1.008179 MiB 00:05:13.803 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:13.803 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:13.803 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58511 00:05:13.803 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:13.803 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58511 00:05:13.803 element at address: 0x20001c4ffd40 with size: 1.000549 MiB 00:05:13.803 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58511 00:05:13.803 element at address: 0x2000350fe8c0 with size: 1.000549 MiB 00:05:13.803 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58511 00:05:13.803 element at address: 0x200003a7f4c0 with size: 0.500549 MiB 00:05:13.803 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58511 00:05:13.803 element at address: 0x200003e7edc0 with size: 0.500549 MiB 00:05:13.803 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58511 00:05:13.803 element at address: 0x20001c07dac0 with size: 0.500549 MiB 00:05:13.803 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:13.803 element at address: 0x200015e72280 with size: 0.500549 MiB 00:05:13.803 associated memzone info: size: 0.500366 
MiB name: RG_MP_SCSI_TASK_Pool 00:05:13.803 element at address: 0x20001c87c440 with size: 0.250549 MiB 00:05:13.803 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:13.803 element at address: 0x200003a5e780 with size: 0.125549 MiB 00:05:13.803 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58511 00:05:13.803 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB 00:05:13.803 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:13.803 element at address: 0x20002b264340 with size: 0.023804 MiB 00:05:13.803 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:13.803 element at address: 0x200003a5a540 with size: 0.016174 MiB 00:05:13.803 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58511 00:05:13.803 element at address: 0x20002b26a4c0 with size: 0.002502 MiB 00:05:13.803 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:13.803 element at address: 0x2000002d6180 with size: 0.000366 MiB 00:05:13.803 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58511 00:05:13.803 element at address: 0x200003aff800 with size: 0.000366 MiB 00:05:13.803 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58511 00:05:13.803 element at address: 0x200015dffd80 with size: 0.000366 MiB 00:05:13.803 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58511 00:05:13.803 element at address: 0x20002b26b000 with size: 0.000366 MiB 00:05:13.803 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:13.803 03:32:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:13.803 03:32:06 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58511 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58511 ']' 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58511 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58511 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:13.803 killing process with pid 58511 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58511' 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58511 00:05:13.803 03:32:06 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58511 00:05:15.187 00:05:15.187 real 0m2.797s 00:05:15.187 user 0m2.806s 00:05:15.187 sys 0m0.400s 00:05:15.187 ************************************ 00:05:15.187 03:32:07 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:15.187 03:32:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:15.187 END TEST dpdk_mem_utility 00:05:15.187 ************************************ 00:05:15.187 03:32:07 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:15.187 03:32:07 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:15.187 03:32:07 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:15.187 03:32:07 -- common/autotest_common.sh@10 -- # set +x 
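The dpdk_mem_utility test that just finished exercises three pieces in sequence: it launches spdk_tgt, calls the env_dpdk_get_mem_stats RPC (which made the target write its statistics to /tmp/spdk_mem_dump.txt, per the RPC response above), and then post-processes that dump with scripts/dpdk_mem_info.py, once for the heap/mempool/memzone summary and once with -m 0 for the per-element map of heap 0. A minimal sketch of the same flow against an already running target, assuming this job's in-tree workspace paths:

  # Ask the running SPDK target to dump its DPDK memory stats (writes /tmp/spdk_mem_dump.txt)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
  # Summarize heaps, mempools and memzones from the dump
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
  # Print the detailed element list for heap 0, as captured in the log above
  /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0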
00:05:15.187 ************************************ 00:05:15.187 START TEST event 00:05:15.187 ************************************ 00:05:15.187 03:32:07 event -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh * Looking for test storage... 00:05:15.447 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:15.448 03:32:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 03:32:07 event -- bdev/nbd_common.sh@6 -- # set -e 03:32:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 03:32:07 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 03:32:07 event -- common/autotest_common.sh@1107 -- # xtrace_disable 03:32:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:15.448 ************************************ 00:05:15.448 START TEST event_perf 00:05:15.448 ************************************ 00:05:15.448 03:32:07 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:15.448 Running I/O for 1 seconds...[2024-10-01 03:32:07.902864] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... [2024-10-01 03:32:07.902969] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58602 ] 00:05:15.707 [2024-10-01 03:32:08.051671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:15.707 [2024-10-01 03:32:08.196683] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.707 Running I/O for 1 seconds...[2024-10-01 03:32:08.196984] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:15.707 [2024-10-01 03:32:08.197147] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.707 [2024-10-01 03:32:08.197170] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:17.094 00:05:17.094 lcore 0: 153390 00:05:17.094 lcore 1: 153387 00:05:17.094 lcore 2: 153388 00:05:17.094 lcore 3: 153390 00:05:17.094 done.
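The per-lcore counters above are the events each reactor processed during the run: event_perf was invoked with -m 0xF, so reactors start on cores 0 through 3, and -t 1 bounds the run to the one second reported by "Running I/O for 1 seconds...". A sketch of the equivalent standalone invocation, assuming the same in-tree build:

  # One reactor per core in mask 0xF, driving events for 1 second
  /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1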
00:05:17.094 00:05:17.094 real 0m1.594s 00:05:17.094 user 0m4.382s 00:05:17.094 sys 0m0.086s 00:05:17.094 03:32:09 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:17.094 ************************************ 00:05:17.094 END TEST event_perf 00:05:17.094 ************************************ 00:05:17.094 03:32:09 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 03:32:09 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:17.094 03:32:09 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:17.094 03:32:09 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:17.094 03:32:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:17.094 ************************************ 00:05:17.094 START TEST event_reactor 00:05:17.094 ************************************ 00:05:17.094 03:32:09 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:17.094 [2024-10-01 03:32:09.559442] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:17.094 [2024-10-01 03:32:09.559556] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58642 ] 00:05:17.355 [2024-10-01 03:32:09.706708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.355 [2024-10-01 03:32:09.891743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.758 test_start 00:05:18.758 oneshot 00:05:18.758 tick 100 00:05:18.758 tick 100 00:05:18.758 tick 250 00:05:18.758 tick 100 00:05:18.758 tick 100 00:05:18.758 tick 250 00:05:18.758 tick 100 00:05:18.758 tick 500 00:05:18.758 tick 100 00:05:18.758 tick 100 00:05:18.758 tick 250 00:05:18.758 tick 100 00:05:18.758 tick 100 00:05:18.758 test_end 00:05:18.758 00:05:18.758 real 0m1.631s 00:05:18.758 user 0m1.434s 00:05:18.758 sys 0m0.087s 00:05:18.758 03:32:11 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:18.758 ************************************ 00:05:18.758 END TEST event_reactor 00:05:18.758 03:32:11 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:18.758 ************************************ 00:05:18.758 03:32:11 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.758 03:32:11 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:18.758 03:32:11 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:18.758 03:32:11 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.758 ************************************ 00:05:18.758 START TEST event_reactor_perf 00:05:18.758 ************************************ 00:05:18.758 03:32:11 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:18.758 [2024-10-01 03:32:11.250935] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:18.758 [2024-10-01 03:32:11.251612] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58684 ] 00:05:19.019 [2024-10-01 03:32:11.401855] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.279 [2024-10-01 03:32:11.584685] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.657 test_start 00:05:20.657 test_end 00:05:20.657 Performance: 315705 events per second 00:05:20.657 00:05:20.657 real 0m1.580s 00:05:20.657 user 0m1.389s 00:05:20.657 sys 0m0.081s 00:05:20.657 03:32:12 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:20.657 ************************************ 00:05:20.657 END TEST event_reactor_perf 00:05:20.657 ************************************ 00:05:20.657 03:32:12 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:20.657 03:32:12 event -- event/event.sh@49 -- # uname -s 00:05:20.657 03:32:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:20.657 03:32:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:20.657 03:32:12 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:20.657 03:32:12 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:20.657 03:32:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.657 ************************************ 00:05:20.657 START TEST event_scheduler 00:05:20.657 ************************************ 00:05:20.657 03:32:12 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:20.657 * Looking for test storage... 
00:05:20.657 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:20.658 03:32:13 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:20.658 03:32:13 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58754 00:05:20.658 03:32:13 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:20.658 03:32:13 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58754 00:05:20.658 03:32:13 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58754 ']' 00:05:20.658 03:32:13 event.event_scheduler -- common/autotest_common.sh@835 -- # local
rpc_addr=/var/tmp/spdk.sock 00:05:20.658 03:32:13 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:20.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:20.658 03:32:13 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:20.658 03:32:13 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:20.658 03:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:20.658 03:32:13 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:20.658 [2024-10-01 03:32:13.083429] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:20.658 [2024-10-01 03:32:13.083572] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58754 ] 00:05:20.919 [2024-10-01 03:32:13.236976] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:20.919 [2024-10-01 03:32:13.458308] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.919 [2024-10-01 03:32:13.458670] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:20.919 [2024-10-01 03:32:13.458853] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:05:20.919 [2024-10-01 03:32:13.458993] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:05:21.488 03:32:13 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:21.488 03:32:13 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:21.488 03:32:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:21.488 03:32:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.488 03:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.488 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:21.488 POWER: Cannot set governor of lcore 0 to userspace 00:05:21.489 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:21.489 POWER: Cannot set governor of lcore 0 to performance 00:05:21.489 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:21.489 POWER: Cannot set governor of lcore 0 to userspace 00:05:21.489 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:21.489 POWER: Cannot set governor of lcore 0 to userspace 00:05:21.489 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:21.489 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:21.489 POWER: Unable to set Power Management Environment for lcore 0 00:05:21.489 [2024-10-01 03:32:13.936614] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:21.489 [2024-10-01 03:32:13.936636] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:21.489 [2024-10-01 03:32:13.936645] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:21.489 [2024-10-01 03:32:13.936663] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:21.489 [2024-10-01 03:32:13.936670] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:21.489 [2024-10-01 03:32:13.936679] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:21.489 03:32:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.489 03:32:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:21.489 03:32:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.489 03:32:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 [2024-10-01 03:32:14.162652] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:21.747 03:32:14 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:21.747 03:32:14 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:21.747 03:32:14 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 ************************************ 00:05:21.747 START TEST scheduler_create_thread 00:05:21.747 ************************************ 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 2 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 3 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 4 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 5 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 6 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 7 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 8 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 9 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.747 10 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.747 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:21.748 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.748 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:21.748 03:32:14 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.748 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:21.748 03:32:14 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:21.748 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:21.748 03:32:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:22.680 03:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:22.680 03:32:15 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:22.680 03:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:22.680 03:32:15 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.053 03:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:24.053 03:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.053 03:32:16 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.053 03:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:24.053 03:32:16 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.429 03:32:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.429 ************************************ 00:05:25.429 END TEST scheduler_create_thread 00:05:25.429 ************************************ 00:05:25.429 00:05:25.429 real 0m3.379s 00:05:25.429 user 0m0.014s 00:05:25.429 sys 0m0.006s 00:05:25.429 03:32:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.429 03:32:17 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:25.429 03:32:17 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:25.429 03:32:17 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58754 00:05:25.429 03:32:17 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58754 ']' 00:05:25.429 03:32:17 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58754 00:05:25.429 03:32:17 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:25.429 03:32:17 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.429 03:32:17 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58754 00:05:25.429 killing process with pid 58754 00:05:25.429 03:32:17 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:25.429 03:32:17 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:25.429 03:32:17 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58754' 00:05:25.429 03:32:17 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58754 00:05:25.429 03:32:17 event.event_scheduler -- 
common/autotest_common.sh@974 -- # wait 58754 00:05:25.429 [2024-10-01 03:32:17.930820] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:26.363 ************************************ 00:05:26.363 END TEST event_scheduler 00:05:26.363 ************************************ 00:05:26.363 00:05:26.363 real 0m5.773s 00:05:26.363 user 0m11.291s 00:05:26.363 sys 0m0.387s 00:05:26.363 03:32:18 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.363 03:32:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.363 03:32:18 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.363 03:32:18 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.363 03:32:18 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.363 03:32:18 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.363 03:32:18 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.363 ************************************ 00:05:26.363 START TEST app_repeat 00:05:26.363 ************************************ 00:05:26.363 03:32:18 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:26.363 Process app_repeat pid: 58866 00:05:26.363 spdk_app_start Round 0 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58866 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58866' 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.363 03:32:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58866 /var/tmp/spdk-nbd.sock 00:05:26.363 03:32:18 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58866 ']' 00:05:26.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.363 03:32:18 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.364 03:32:18 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.364 03:32:18 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:26.364 03:32:18 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.364 03:32:18 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:26.364 03:32:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.364 [2024-10-01 03:32:18.733439] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
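The trace above closes out the event_scheduler suite: with the dynamic scheduler configured (load limit 20, core limit 80, core busy 95), framework_start_init brings the reactors up, and scheduler_create_thread builds a mix of pinned busy and idle threads, retunes one, and deletes another through the scheduler_plugin RPCs before the harness kills pid 58754 and launches the next suite, app_repeat. A minimal sketch of the RPC sequence being driven, assuming the scheduler test application is already listening on the default RPC socket and rpc.py can locate the plugin (plugin lookup is an assumption; the trace only shows the rpc_cmd wrapper):

  ./scripts/rpc.py framework_start_init
  # busy thread pinned to core 0, reporting itself 100% active:
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # idle counterpart on the same core:
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
  # scheduler_thread_create prints the new thread id; the test captures and reuses it:
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12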
00:05:26.364 [2024-10-01 03:32:18.733973] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58866 ] 00:05:26.364 [2024-10-01 03:32:18.880567] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.667 [2024-10-01 03:32:19.033914] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.667 [2024-10-01 03:32:19.033929] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.248 03:32:19 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:27.248 03:32:19 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:27.248 03:32:19 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.248 Malloc0 00:05:27.506 03:32:19 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.765 Malloc1 00:05:27.765 03:32:20 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.765 /dev/nbd0 00:05:27.765 03:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:28.023 03:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:28.023 03:32:20 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.023 1+0 records in 00:05:28.023 1+0 records out 00:05:28.023 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000295809 s, 13.8 MB/s 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.023 03:32:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.023 03:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.023 03:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.024 03:32:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.024 /dev/nbd1 00:05:28.024 03:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.024 03:32:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.024 1+0 records in 00:05:28.024 1+0 records out 00:05:28.024 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279484 s, 14.7 MB/s 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:28.024 03:32:20 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:28.024 03:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.024 03:32:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
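Before any I/O runs, app_repeat builds its fixtures the same way in every round: two 64 MB malloc bdevs with a 4 KiB block size, each exported as a kernel nbd device, after which waitfornbd proves the device is actually usable. The pattern sketched below follows the trace; the retry delay between /proc/partitions polls is an assumption, since the trace only shows the grep and the break:

  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # prints the bdev name, e.g. Malloc0
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
  for i in $(seq 1 20); do                        # bounded poll for the partition entry
      grep -q -w nbd0 /proc/partitions && break
      sleep 0.1                                   # assumed delay, not visible in the trace
  done
  dd if=/dev/nbd0 of=nbdtest bs=4096 count=1 iflag=direct   # O_DIRECT read probe
  [ "$(stat -c %s nbdtest)" != 0 ] && rm -f nbdtest         # a real 4 KiB block came back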
00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.282 { 00:05:28.282 "nbd_device": "/dev/nbd0", 00:05:28.282 "bdev_name": "Malloc0" 00:05:28.282 }, 00:05:28.282 { 00:05:28.282 "nbd_device": "/dev/nbd1", 00:05:28.282 "bdev_name": "Malloc1" 00:05:28.282 } 00:05:28.282 ]' 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.282 { 00:05:28.282 "nbd_device": "/dev/nbd0", 00:05:28.282 "bdev_name": "Malloc0" 00:05:28.282 }, 00:05:28.282 { 00:05:28.282 "nbd_device": "/dev/nbd1", 00:05:28.282 "bdev_name": "Malloc1" 00:05:28.282 } 00:05:28.282 ]' 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.282 /dev/nbd1' 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.282 /dev/nbd1' 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.282 256+0 records in 00:05:28.282 256+0 records out 00:05:28.282 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00733128 s, 143 MB/s 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.282 03:32:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.541 256+0 records in 00:05:28.541 256+0 records out 00:05:28.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149978 s, 69.9 MB/s 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.541 256+0 records in 00:05:28.541 256+0 records out 00:05:28.541 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0171152 s, 61.3 MB/s 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.541 03:32:20 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.541 03:32:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.541 03:32:21 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.800 03:32:21 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.800 03:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:29.058 03:32:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:29.059 03:32:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:29.059 03:32:21 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.317 03:32:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:30.250 [2024-10-01 03:32:22.453743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:30.250 [2024-10-01 03:32:22.587112] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.250 [2024-10-01 03:32:22.587219] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.250 [2024-10-01 03:32:22.685368] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:30.250 [2024-10-01 03:32:22.685430] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.781 spdk_app_start Round 1 00:05:32.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:32.781 03:32:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.781 03:32:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.781 03:32:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58866 /var/tmp/spdk-nbd.sock 00:05:32.781 03:32:24 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58866 ']' 00:05:32.781 03:32:24 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.781 03:32:24 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.781 03:32:24 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
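That completes one full round: write 1 MiB of random data through each nbd device with O_DIRECT, byte-compare it back against the source file, detach both devices, confirm nbd_get_disks is empty, and terminate the app iteration over RPC so the next round starts from a fresh spdk_app_start. The data check itself, sketched from the trace with the long repo paths shortened:

  dd if=/dev/urandom of=nbdrandtest bs=4096 count=256            # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write through the block device
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
      cmp -b -n 1M nbdrandtest $nbd                              # read back, compare byte-for-byte
  done
  rm nbdrandtest
  ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM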
00:05:32.781 03:32:24 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.781 03:32:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.781 03:32:25 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.781 03:32:25 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:32.781 03:32:25 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.781 Malloc0 00:05:32.781 03:32:25 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:33.039 Malloc1 00:05:33.039 03:32:25 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.039 03:32:25 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.039 03:32:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.039 03:32:25 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:33.039 03:32:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.039 03:32:25 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:33.039 03:32:25 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:33.039 03:32:25 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.039 03:32:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:33.039 03:32:25 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:33.040 03:32:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.040 03:32:25 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:33.040 03:32:25 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:33.040 03:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:33.040 03:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.040 03:32:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:33.297 /dev/nbd0 00:05:33.297 03:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:33.297 03:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.297 1+0 records in 00:05:33.297 1+0 records out 
00:05:33.297 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420494 s, 9.7 MB/s 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.297 03:32:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.297 03:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.297 03:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.297 03:32:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.555 /dev/nbd1 00:05:33.555 03:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.555 03:32:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.555 1+0 records in 00:05:33.555 1+0 records out 00:05:33.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193388 s, 21.2 MB/s 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:33.555 03:32:25 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:33.555 03:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.555 03:32:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.555 03:32:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.555 03:32:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.555 03:32:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:33.814 { 00:05:33.814 "nbd_device": "/dev/nbd0", 00:05:33.814 "bdev_name": "Malloc0" 00:05:33.814 }, 00:05:33.814 { 00:05:33.814 "nbd_device": "/dev/nbd1", 00:05:33.814 "bdev_name": "Malloc1" 00:05:33.814 } 
00:05:33.814 ]' 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.814 { 00:05:33.814 "nbd_device": "/dev/nbd0", 00:05:33.814 "bdev_name": "Malloc0" 00:05:33.814 }, 00:05:33.814 { 00:05:33.814 "nbd_device": "/dev/nbd1", 00:05:33.814 "bdev_name": "Malloc1" 00:05:33.814 } 00:05:33.814 ]' 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.814 /dev/nbd1' 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.814 /dev/nbd1' 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.814 256+0 records in 00:05:33.814 256+0 records out 00:05:33.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00420784 s, 249 MB/s 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.814 256+0 records in 00:05:33.814 256+0 records out 00:05:33.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0119189 s, 88.0 MB/s 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.814 256+0 records in 00:05:33.814 256+0 records out 00:05:33.814 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225776 s, 46.4 MB/s 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.814 03:32:26 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.814 03:32:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:34.072 03:32:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:34.333 03:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.594 03:32:26 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.594 03:32:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.594 03:32:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.854 03:32:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:35.795 [2024-10-01 03:32:28.072345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:35.795 [2024-10-01 03:32:28.242553] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.795 [2024-10-01 03:32:28.242616] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.055 [2024-10-01 03:32:28.369245] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:36.055 [2024-10-01 03:32:28.369398] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.958 spdk_app_start Round 2 00:05:37.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:37.958 03:32:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.958 03:32:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:37.958 03:32:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58866 /var/tmp/spdk-nbd.sock 00:05:37.958 03:32:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58866 ']' 00:05:37.958 03:32:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.958 03:32:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.958 03:32:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
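The empty-list check just above is how nbd_get_count verifies a clean detach: nbd_get_disks returns a JSON array of {nbd_device, bdev_name} pairs, jq pulls out the device paths, and grep -c counts them. A sketch of the idea; the trailing true matches the bare true visible in the trace and is presumably there because grep -c exits nonzero when it counts zero matches:

  disks=$(./scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  names=$(echo "$disks" | jq -r '.[] | .nbd_device')
  count=$(echo "$names" | grep -c /dev/nbd || true)   # 2 while attached, 0 after detach
  [ "$count" -ne 0 ] && echo "devices still attached"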
00:05:37.958 03:32:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.958 03:32:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.958 03:32:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:37.958 03:32:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:37.958 03:32:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.216 Malloc0 00:05:38.216 03:32:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:38.475 Malloc1 00:05:38.475 03:32:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.475 03:32:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:38.733 /dev/nbd0 00:05:38.733 03:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:38.733 03:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:38.733 03:32:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:38.733 03:32:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.733 03:32:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.733 03:32:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.733 03:32:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:38.733 03:32:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.733 03:32:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.733 03:32:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.734 03:32:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.734 1+0 records in 00:05:38.734 1+0 records out 
00:05:38.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000429644 s, 9.5 MB/s 00:05:38.734 03:32:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.734 03:32:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.734 03:32:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.734 03:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.734 03:32:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.734 03:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.734 03:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.734 03:32:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.992 /dev/nbd1 00:05:38.992 03:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.992 03:32:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.992 1+0 records in 00:05:38.992 1+0 records out 00:05:38.992 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262304 s, 15.6 MB/s 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:38.992 03:32:31 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:38.992 03:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.992 03:32:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.993 03:32:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.993 03:32:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.993 03:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:39.251 { 00:05:39.251 "nbd_device": "/dev/nbd0", 00:05:39.251 "bdev_name": "Malloc0" 00:05:39.251 }, 00:05:39.251 { 00:05:39.251 "nbd_device": "/dev/nbd1", 00:05:39.251 "bdev_name": "Malloc1" 00:05:39.251 } 
00:05:39.251 ]' 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:39.251 { 00:05:39.251 "nbd_device": "/dev/nbd0", 00:05:39.251 "bdev_name": "Malloc0" 00:05:39.251 }, 00:05:39.251 { 00:05:39.251 "nbd_device": "/dev/nbd1", 00:05:39.251 "bdev_name": "Malloc1" 00:05:39.251 } 00:05:39.251 ]' 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:39.251 /dev/nbd1' 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:39.251 /dev/nbd1' 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:39.251 256+0 records in 00:05:39.251 256+0 records out 00:05:39.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0075307 s, 139 MB/s 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:39.251 256+0 records in 00:05:39.251 256+0 records out 00:05:39.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0182719 s, 57.4 MB/s 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:39.251 256+0 records in 00:05:39.251 256+0 records out 00:05:39.251 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175554 s, 59.7 MB/s 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:39.251 03:32:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:39.252 03:32:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:39.252 03:32:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.252 03:32:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.252 03:32:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:39.252 03:32:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:39.252 03:32:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.252 03:32:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:39.510 03:32:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.768 03:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:40.027 03:32:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:40.027 03:32:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:40.286 03:32:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:40.853 [2024-10-01 03:32:33.269793] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:40.853 [2024-10-01 03:32:33.395599] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:05:40.853 [2024-10-01 03:32:33.395674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.111 [2024-10-01 03:32:33.492775] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.111 [2024-10-01 03:32:33.492825] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:43.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:43.638 03:32:35 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58866 /var/tmp/spdk-nbd.sock 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58866 ']' 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
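Note that pid 58866 survives every round: spdk_kill_instance SIGTERM only ends the current spdk_app_start iteration, and app_repeat immediately reinitializes for the next one (Round 3 here, the last of the four requested). Final teardown instead goes through the killprocess helper traced just below, which reduces to roughly this:

  pid=58866
  kill -0 "$pid"                              # fail fast if it already exited
  name=$(ps --no-headers -o comm= "$pid")     # e.g. reactor_0
  [ "$name" != sudo ] && kill "$pid"          # refuse to kill sudo; otherwise SIGTERM
  wait "$pid"                                 # reap; works because the app is a child of this shell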
00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:43.638 03:32:35 event.app_repeat -- event/event.sh@39 -- # killprocess 58866 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58866 ']' 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58866 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58866 00:05:43.638 killing process with pid 58866 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58866' 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58866 00:05:43.638 03:32:35 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58866 00:05:44.205 spdk_app_start is called in Round 0. 00:05:44.205 Shutdown signal received, stop current app iteration 00:05:44.205 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:44.205 spdk_app_start is called in Round 1. 00:05:44.205 Shutdown signal received, stop current app iteration 00:05:44.205 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:44.205 spdk_app_start is called in Round 2. 00:05:44.205 Shutdown signal received, stop current app iteration 00:05:44.205 Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 reinitialization... 00:05:44.205 spdk_app_start is called in Round 3. 00:05:44.205 Shutdown signal received, stop current app iteration 00:05:44.205 ************************************ 00:05:44.205 END TEST app_repeat 00:05:44.205 ************************************ 00:05:44.205 03:32:36 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:44.205 03:32:36 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:44.205 00:05:44.205 real 0m17.790s 00:05:44.205 user 0m38.435s 00:05:44.205 sys 0m1.982s 00:05:44.205 03:32:36 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:44.205 03:32:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:44.205 03:32:36 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:44.205 03:32:36 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:44.205 03:32:36 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.205 03:32:36 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.205 03:32:36 event -- common/autotest_common.sh@10 -- # set +x 00:05:44.205 ************************************ 00:05:44.205 START TEST cpu_locks 00:05:44.205 ************************************ 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:44.205 * Looking for test storage... 
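killprocess, which just tore down pid 58866, layers its guards in the order the trace shows: a non-empty pid, a kill -0 liveness probe, then a look at the process name through ps before the real SIGTERM. A hedged reconstruction (the branch taken when the name is sudo never fires here, so its body below is a guess):

    killprocess() {
        local pid=$1 process_name=
        [ -n "$pid" ] || return 1                 # mirrors the '[' -z ... ']' guard
        kill -0 "$pid" || return 1                # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [ "$process_name" = sudo ]; then
            return 1                              # assumption: never TERM a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true                       # reap it; exit status is not checked here
    }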
00:05:44.205 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.205 03:32:36 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.205 --rc genhtml_branch_coverage=1 00:05:44.205 --rc genhtml_function_coverage=1 00:05:44.205 --rc genhtml_legend=1 00:05:44.205 --rc geninfo_all_blocks=1 00:05:44.205 --rc geninfo_unexecuted_blocks=1 00:05:44.205 00:05:44.205 ' 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.205 --rc genhtml_branch_coverage=1 00:05:44.205 --rc genhtml_function_coverage=1 
00:05:44.205 --rc genhtml_legend=1 00:05:44.205 --rc geninfo_all_blocks=1 00:05:44.205 --rc geninfo_unexecuted_blocks=1 00:05:44.205 00:05:44.205 ' 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.205 --rc genhtml_branch_coverage=1 00:05:44.205 --rc genhtml_function_coverage=1 00:05:44.205 --rc genhtml_legend=1 00:05:44.205 --rc geninfo_all_blocks=1 00:05:44.205 --rc geninfo_unexecuted_blocks=1 00:05:44.205 00:05:44.205 ' 00:05:44.205 03:32:36 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:44.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.205 --rc genhtml_branch_coverage=1 00:05:44.205 --rc genhtml_function_coverage=1 00:05:44.205 --rc genhtml_legend=1 00:05:44.206 --rc geninfo_all_blocks=1 00:05:44.206 --rc geninfo_unexecuted_blocks=1 00:05:44.206 00:05:44.206 ' 00:05:44.206 03:32:36 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:44.206 03:32:36 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:44.206 03:32:36 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:44.206 03:32:36 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:44.206 03:32:36 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:44.206 03:32:36 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:44.206 03:32:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.206 ************************************ 00:05:44.206 START TEST default_locks 00:05:44.206 ************************************ 00:05:44.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59302 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59302 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59302 ']' 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.206 03:32:36 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.466 [2024-10-01 03:32:36.768709] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
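Before any locking test runs, the harness probes `lcov --version` through a small comparator (`lt 1.15 2`) to pick coverage flags. A simplified sketch of the idiom visible in the trace, keeping the IFS=.-: tokenization; padding short versions with zeros stands in for the digit validation the real scripts/common.sh performs:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"        # "1.15" -> (1 15)
        local op=$2
        IFS=.-: read -ra ver2 <<< "$3"        # "2"    -> (2)
        local v n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < n; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            ((a > b)) && { [[ $op == '>' ]]; return; }
            ((a < b)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '<=' || $op == '>=' || $op == '==' ]]
    }

    lt 1.15 2 && echo "lcov predates 2.x: use the plain --rc options"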
00:05:44.466 [2024-10-01 03:32:36.768970] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59302 ] 00:05:44.466 [2024-10-01 03:32:36.911447] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.726 [2024-10-01 03:32:37.093295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.295 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:45.295 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:45.295 03:32:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59302 00:05:45.295 03:32:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59302 00:05:45.295 03:32:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59302 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 59302 ']' 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 59302 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59302 00:05:45.555 killing process with pid 59302 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59302' 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 59302 00:05:45.555 03:32:37 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 59302 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59302 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59302 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:46.933 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
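locks_exist is the entire assertion of the default_locks case above: a target started with -m 0x1 must hold a file lock whose name marks the claimed core, and the check is just lslocks piped into grep, exactly as traced:

    locks_exist() {
        local pid=$1
        # spdk_tgt takes /var/tmp/spdk_cpu_lock_NNN per claimed core;
        # lslocks -p lists every lock that pid currently holds
        lslocks -p "$pid" | grep -q spdk_cpu_lock
    }

    locks_exist "$spdk_tgt_pid" || echo 'expected core lock is missing' >&2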
00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59302 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 59302 ']' 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.933 ERROR: process (pid: 59302) is no longer running 00:05:46.933 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59302) - No such process 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:46.933 03:32:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:46.933 00:05:46.933 real 0m2.654s 00:05:46.933 user 0m2.635s 00:05:46.933 sys 0m0.480s 00:05:46.934 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.934 03:32:39 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.934 ************************************ 00:05:46.934 END TEST default_locks 00:05:46.934 ************************************ 00:05:46.934 03:32:39 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:46.934 03:32:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.934 03:32:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.934 03:32:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.934 ************************************ 00:05:46.934 START TEST default_locks_via_rpc 00:05:46.934 ************************************ 00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:46.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
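The NOT wrapper above inverts an expected failure with one refinement: exit codes above 128 mean the child died from a signal, and the trace checks that before the final (( !es == 0 )) test. A reduced sketch (the real helper goes on to decode which signal was involved; that part is omitted here):

    NOT() {
        local es=0
        "$@" || es=$?
        if ((es > 128)); then          # 128+N: killed by signal N
            return 1                   # a crash is not the failure we asked for
        fi
        ((es != 0))                    # succeed only when the command failed
    }

    # from the trace: pid 59302 is already dead, so this wait must fail
    NOT waitforlisten 59302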
00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59355 00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59355 00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59355 ']' 00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.934 03:32:39 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.934 [2024-10-01 03:32:39.475987] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:46.934 [2024-10-01 03:32:39.476116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59355 ] 00:05:47.190 [2024-10-01 03:32:39.623086] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.447 [2024-10-01 03:32:39.768918] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.010 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59355 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59355 00:05:48.011 
03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59355 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 59355 ']' 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 59355 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59355 00:05:48.011 killing process with pid 59355 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59355' 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 59355 00:05:48.011 03:32:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 59355 00:05:49.383 ************************************ 00:05:49.383 END TEST default_locks_via_rpc 00:05:49.383 ************************************ 00:05:49.383 00:05:49.383 real 0m2.393s 00:05:49.383 user 0m2.394s 00:05:49.383 sys 0m0.442s 00:05:49.383 03:32:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.383 03:32:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.383 03:32:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:49.383 03:32:41 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:49.383 03:32:41 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.384 03:32:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.384 ************************************ 00:05:49.384 START TEST non_locking_app_on_locked_coremask 00:05:49.384 ************************************ 00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59418 00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59418 /var/tmp/spdk.sock 00:05:49.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
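default_locks_via_rpc, wrapped up above, never restarts the target: it drops and re-acquires the core locks at runtime through a pair of RPCs and re-runs the same lslocks assertion in between. Condensed from the trace (the default RPC socket is assumed, since rpc_cmd passed no -s flag):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$rpc_py" framework_disable_cpumask_locks          # releases the spdk_cpu_lock_* files
    if compgen -G '/var/tmp/spdk_cpu_lock_*' > /dev/null; then
        echo 'lock files survived the disable RPC' >&2
    fi
    "$rpc_py" framework_enable_cpumask_locks           # takes the locks again
    lslocks -p "$spdk_tgt_pid" | grep -q spdk_cpu_lock # must hold once more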
00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59418 ']' 00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.384 03:32:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.384 [2024-10-01 03:32:41.929633] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:05:49.384 [2024-10-01 03:32:41.929941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59418 ] 00:05:49.711 [2024-10-01 03:32:42.078922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.987 [2024-10-01 03:32:42.264352] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59434 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59434 /var/tmp/spdk2.sock 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59434 ']' 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:50.559 03:32:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:50.559 [2024-10-01 03:32:42.930961] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
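non_locking_app_on_locked_coremask, now underway, inverts the contention: the first target claims core 0 as usual and the second opts out of locking entirely, so both share the core without conflict. Condensed from the launch lines in the trace:

    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 & tgt1=$!                       # claims spdk_cpu_lock_000
    waitforlisten "$tgt1"

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & tgt2=$!
    waitforlisten "$tgt2" /var/tmp/spdk2.sock          # succeeds: tgt2 never contends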
00:05:50.559 [2024-10-01 03:32:42.931298] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59434 ] 00:05:50.559 [2024-10-01 03:32:43.084734] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.559 [2024-10-01 03:32:43.084783] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.131 [2024-10-01 03:32:43.454312] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.074 03:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.074 03:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:52.074 03:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59418 00:05:52.074 03:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59418 00:05:52.074 03:32:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59418 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59418 ']' 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59418 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59418 00:05:52.646 killing process with pid 59418 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59418' 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59418 00:05:52.646 03:32:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59418 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59434 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59434 ']' 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59434 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59434 00:05:55.932 killing process with pid 59434 00:05:55.932 03:32:47 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59434' 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59434 00:05:55.932 03:32:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59434 00:05:56.869 ************************************ 00:05:56.869 END TEST non_locking_app_on_locked_coremask 00:05:56.869 ************************************ 00:05:56.869 00:05:56.869 real 0m7.291s 00:05:56.869 user 0m7.529s 00:05:56.869 sys 0m0.917s 00:05:56.869 03:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:56.869 03:32:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.869 03:32:49 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:56.869 03:32:49 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:56.869 03:32:49 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:56.869 03:32:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.869 ************************************ 00:05:56.869 START TEST locking_app_on_unlocked_coremask 00:05:56.869 ************************************ 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59536 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59536 /var/tmp/spdk.sock 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59536 ']' 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:56.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.869 03:32:49 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:56.869 [2024-10-01 03:32:49.282671] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:05:56.869 [2024-10-01 03:32:49.283370] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59536 ] 00:05:57.131 [2024-10-01 03:32:49.434403] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:57.131 [2024-10-01 03:32:49.434590] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.131 [2024-10-01 03:32:49.617499] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59552 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59552 /var/tmp/spdk2.sock 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59552 ']' 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:57.701 03:32:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:57.961 [2024-10-01 03:32:50.283143] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
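locking_app_on_unlocked_coremask, running above, is the mirror image: now the first target skips locking, leaving the second, fully locking target free to claim core 0 for itself:

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks & tgt1=$!   # holds no core lock
    waitforlisten "$tgt1"

    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock & tgt2=$!    # free to lock core 0
    waitforlisten "$tgt2" /var/tmp/spdk2.sock
    lslocks -p "$tgt2" | grep -q spdk_cpu_lock             # tgt2 owns the lock now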
00:05:57.961 [2024-10-01 03:32:50.283411] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59552 ] 00:05:57.961 [2024-10-01 03:32:50.437028] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.533 [2024-10-01 03:32:50.806923] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.472 03:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:59.472 03:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:05:59.472 03:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59552 00:05:59.472 03:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59552 00:05:59.472 03:32:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59536 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59536 ']' 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59536 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59536 00:06:00.039 killing process with pid 59536 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59536' 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59536 00:06:00.039 03:32:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59536 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59552 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59552 ']' 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59552 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59552 00:06:02.568 killing process with pid 59552 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:02.568 03:32:54 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59552' 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59552 00:06:02.568 03:32:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59552 00:06:03.943 ************************************ 00:06:03.943 END TEST locking_app_on_unlocked_coremask 00:06:03.943 ************************************ 00:06:03.943 00:06:03.943 real 0m6.877s 00:06:03.943 user 0m7.109s 00:06:03.943 sys 0m0.865s 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.943 03:32:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:03.943 03:32:56 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:03.943 03:32:56 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:03.943 03:32:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:03.943 ************************************ 00:06:03.943 START TEST locking_app_on_locked_coremask 00:06:03.943 ************************************ 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:03.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59649 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59649 /var/tmp/spdk.sock 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59649 ']' 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:03.943 03:32:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:03.943 [2024-10-01 03:32:56.205585] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:03.943 [2024-10-01 03:32:56.205702] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59649 ] 00:06:03.943 [2024-10-01 03:32:56.354278] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.201 [2024-10-01 03:32:56.496792] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59665 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59665 /var/tmp/spdk2.sock 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59665 /var/tmp/spdk2.sock 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:04.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59665 /var/tmp/spdk2.sock 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59665 ']' 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:04.768 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.768 [2024-10-01 03:32:57.114326] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:04.768 [2024-10-01 03:32:57.114592] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59665 ] 00:06:04.768 [2024-10-01 03:32:57.262097] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59649 has claimed it. 00:06:04.768 [2024-10-01 03:32:57.262142] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.350 ERROR: process (pid: 59665) is no longer running 00:06:05.350 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59665) - No such process 00:06:05.350 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.350 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:05.350 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:05.350 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:05.350 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:05.350 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:05.350 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59649 00:06:05.350 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59649 00:06:05.350 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59649 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59649 ']' 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59649 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59649 00:06:05.623 killing process with pid 59649 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59649' 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59649 00:06:05.623 03:32:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59649 00:06:07.000 ************************************ 00:06:07.000 END TEST locking_app_on_locked_coremask 00:06:07.000 ************************************ 00:06:07.000 00:06:07.000 real 0m3.090s 00:06:07.000 user 0m3.314s 00:06:07.000 sys 0m0.540s 00:06:07.000 03:32:59 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:07.000 03:32:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.000 03:32:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:07.000 03:32:59 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:07.000 03:32:59 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:07.000 03:32:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.000 ************************************ 00:06:07.000 START TEST locking_overlapped_coremask 00:06:07.000 ************************************ 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59718 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59718 /var/tmp/spdk.sock 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59718 ']' 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.000 03:32:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.000 [2024-10-01 03:32:59.357034] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
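locking_app_on_locked_coremask, which just finished, is the strict case: both instances want the same lock, so the second must die during startup, and the test asserts that by wrapping waitforlisten in NOT. The claim_cpu_cores ERROR and the 'No such process' kill in the trace are the expected outcome, not a failure. Condensed:

    "$spdk_tgt" -m 0x1 & tgt1=$!                       # claims core 0
    waitforlisten "$tgt1"

    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock & tgt2=$!
    NOT waitforlisten "$tgt2" /var/tmp/spdk2.sock      # passes because tgt2 exited
    lslocks -p "$tgt1" | grep -q spdk_cpu_lock         # tgt1 still holds its lock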
00:06:07.000 [2024-10-01 03:32:59.357148] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59718 ] 00:06:07.000 [2024-10-01 03:32:59.507677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.260 [2024-10-01 03:32:59.651101] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.260 [2024-10-01 03:32:59.651548] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.260 [2024-10-01 03:32:59.651600] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59736 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59736 /var/tmp/spdk2.sock 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59736 /var/tmp/spdk2.sock 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59736 /var/tmp/spdk2.sock 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59736 ']' 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:07.828 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.828 [2024-10-01 03:33:00.283976] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
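The overlapped masks here are worth unpacking: 0x7 is binary 00111, cores 0 through 2, and the second instance's 0x1c is 11100, cores 2 through 4, so the two collide only on core 2, exactly the core named in the claim error below:

    # 0x07 = 0b00111 -> cores 0,1,2   (first target, -m 0x7)
    # 0x1c = 0b11100 -> cores 2,3,4   (second target, -m 0x1c)
    printf 'contested mask: 0x%x\n' $((0x7 & 0x1c))    # prints 0x4 -> core 2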
00:06:07.828 [2024-10-01 03:33:00.284279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59736 ] 00:06:08.087 [2024-10-01 03:33:00.440110] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59718 has claimed it. 00:06:08.087 [2024-10-01 03:33:00.440270] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:08.652 ERROR: process (pid: 59736) is no longer running 00:06:08.652 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59736) - No such process 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59718 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59718 ']' 00:06:08.652 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59718 00:06:08.653 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:08.653 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:08.653 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59718 00:06:08.653 killing process with pid 59718 00:06:08.653 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:08.653 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:08.653 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59718' 00:06:08.653 03:33:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59718 00:06:08.653 03:33:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59718 00:06:10.031 ************************************ 00:06:10.031 END TEST locking_overlapped_coremask 00:06:10.031 00:06:10.031 real 0m2.921s 00:06:10.031 user 0m7.710s 00:06:10.031 sys 0m0.443s 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:10.031 ************************************ 00:06:10.031 03:33:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:10.031 03:33:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.031 03:33:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.031 03:33:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.031 ************************************ 00:06:10.031 START TEST locking_overlapped_coremask_via_rpc 00:06:10.031 ************************************ 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59789 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59789 /var/tmp/spdk.sock 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59789 ']' 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.031 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.031 03:33:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.031 [2024-10-01 03:33:02.329637] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:10.031 [2024-10-01 03:33:02.329756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59789 ] 00:06:10.031 [2024-10-01 03:33:02.482307] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
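A few records up, check_remaining_locks verified the lock bookkeeping: the surviving target with mask 0x7 must own exactly one lock file per claimed core and nothing else. The helper is essentially the glob-versus-expected comparison traced above:

    locks=(/var/tmp/spdk_cpu_lock_*)                     # whatever lock files exist
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})   # cores 0-2 for mask 0x7
    [[ ${locks[*]} == "${locks_expected[*]}" ]]          # fails on extra or missing locks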
00:06:10.031 [2024-10-01 03:33:02.482453] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:10.292 [2024-10-01 03:33:02.665042] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.292 [2024-10-01 03:33:02.665608] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.292 [2024-10-01 03:33:02.665692] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59807 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59807 /var/tmp/spdk2.sock 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59807 ']' 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:10.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.862 03:33:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:10.862 [2024-10-01 03:33:03.339824] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:10.862 [2024-10-01 03:33:03.340322] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59807 ] 00:06:11.122 [2024-10-01 03:33:03.497130] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
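Both "CPU core locks deactivated" notices above come from starting spdk_tgt with --disable-cpumask-locks: no lock files are taken at startup, so two targets whose masks overlap on core 2 (0x7 and 0x1c) can now run side by side, unlike in the previous test. Locking is then switched back on per process at runtime, which is the step exercised next. A condensed sketch of the flow (commands reconstructed from the traces above):

    build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &
    build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks &
    scripts/rpc.py framework_enable_cpumask_locks        # first target claims cores 0-2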
00:06:11.122 [2024-10-01 03:33:03.497293] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:11.382 [2024-10-01 03:33:03.880387] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:06:11.382 [2024-10-01 03:33:03.880507] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 4 00:06:11.382 [2024-10-01 03:33:03.880658] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:12.758 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.758 [2024-10-01 03:33:05.057158] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59789 has claimed it. 00:06:12.758 request: 00:06:12.758 { 00:06:12.758 "method": "framework_enable_cpumask_locks", 00:06:12.758 "req_id": 1 00:06:12.758 } 00:06:12.759 Got JSON-RPC error response 00:06:12.759 response: 00:06:12.759 { 00:06:12.759 "code": -32603, 00:06:12.759 "message": "Failed to claim CPU core: 2" 00:06:12.759 } 00:06:12.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
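The request/response pair above is the expected failure: the first target already claimed core 2 when its locks were re-enabled, so the same RPC against the second target's socket comes back as JSON-RPC internal error -32603, and the surrounding `NOT rpc_cmd` turns that failure into a pass. Issued directly, the failing call is:

    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # request:  {"method": "framework_enable_cpumask_locks", "req_id": 1}
    # response: {"code": -32603, "message": "Failed to claim CPU core: 2"}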
00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59789 /var/tmp/spdk.sock 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59789 ']' 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59807 /var/tmp/spdk2.sock 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59807 ']' 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
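waitforlisten, traced repeatedly above, blocks until a freshly started target accepts RPCs on its UNIX socket (rpc_addr defaults to /var/tmp/spdk.sock, with the max_retries=100 budget visible in the trace). A rough sketch of the loop, with the probe being an assumption (the real helper in test/common/autotest_common.sh may probe differently):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        for ((i = 0; i < max_retries; i++)); do
            # assumed probe: any cheap RPC against the socket
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            kill -0 "$pid" 2>/dev/null || return 1   # target died before listening
            sleep 0.1
        done
        return 1
    }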
00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:12.759 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.016 ************************************ 00:06:13.016 END TEST locking_overlapped_coremask_via_rpc 00:06:13.016 ************************************ 00:06:13.016 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:13.016 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:13.016 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:13.016 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:13.016 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:13.016 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:13.016 00:06:13.016 real 0m3.231s 00:06:13.016 user 0m1.021s 00:06:13.016 sys 0m0.137s 00:06:13.016 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.016 03:33:05 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:13.016 03:33:05 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:13.016 03:33:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59789 ]] 00:06:13.016 03:33:05 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59789 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59789 ']' 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59789 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59789 00:06:13.016 killing process with pid 59789 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59789' 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59789 00:06:13.016 03:33:05 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59789 00:06:14.385 03:33:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59807 ]] 00:06:14.385 03:33:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59807 00:06:14.385 03:33:06 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59807 ']' 00:06:14.385 03:33:06 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59807 00:06:14.385 03:33:06 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:06:14.385 03:33:06 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.385 
03:33:06 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59807 00:06:14.385 killing process with pid 59807 00:06:14.385 03:33:06 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:14.385 03:33:06 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:14.385 03:33:06 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59807' 00:06:14.385 03:33:06 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59807 00:06:14.385 03:33:06 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59807 00:06:15.918 03:33:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.918 03:33:08 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:15.918 03:33:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59789 ]] 00:06:15.918 03:33:08 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59789 00:06:15.918 Process with pid 59789 is not found 00:06:15.918 03:33:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59789 ']' 00:06:15.918 03:33:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59789 00:06:15.918 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59789) - No such process 00:06:15.918 03:33:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59789 is not found' 00:06:15.918 03:33:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59807 ]] 00:06:15.918 03:33:08 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59807 00:06:15.918 03:33:08 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59807 ']' 00:06:15.918 Process with pid 59807 is not found 00:06:15.918 03:33:08 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59807 00:06:15.918 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59807) - No such process 00:06:15.918 03:33:08 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59807 is not found' 00:06:15.918 03:33:08 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:15.918 ************************************ 00:06:15.918 END TEST cpu_locks 00:06:15.918 ************************************ 00:06:15.918 00:06:15.918 real 0m31.529s 00:06:15.918 user 0m52.928s 00:06:15.918 sys 0m4.680s 00:06:15.918 03:33:08 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.918 03:33:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:15.918 ************************************ 00:06:15.918 END TEST event 00:06:15.918 ************************************ 00:06:15.918 00:06:15.918 real 1m0.403s 00:06:15.918 user 1m50.024s 00:06:15.918 sys 0m7.547s 00:06:15.918 03:33:08 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:15.918 03:33:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:15.918 03:33:08 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:15.918 03:33:08 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:15.918 03:33:08 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.918 03:33:08 -- common/autotest_common.sh@10 -- # set +x 00:06:15.918 ************************************ 00:06:15.918 START TEST thread 00:06:15.918 ************************************ 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:15.918 * Looking for test storage... 
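Every suite in this log is launched through run_test, which produces the starred START TEST / END TEST banners and the real/user/sys timing lines seen throughout. Its shape is roughly the following (simplified; the real wrapper in autotest_common.sh also performs the "'[' 2 -le 1 ']'" argument-count check traced before each suite):

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                 # the source of the "real 0mX.XXXs" lines
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }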
00:06:15.918 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:15.918 03:33:08 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.918 03:33:08 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.918 03:33:08 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.918 03:33:08 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.918 03:33:08 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.918 03:33:08 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.918 03:33:08 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.918 03:33:08 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.918 03:33:08 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.918 03:33:08 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.918 03:33:08 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.918 03:33:08 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:15.918 03:33:08 thread -- scripts/common.sh@345 -- # : 1 00:06:15.918 03:33:08 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.918 03:33:08 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.918 03:33:08 thread -- scripts/common.sh@365 -- # decimal 1 00:06:15.918 03:33:08 thread -- scripts/common.sh@353 -- # local d=1 00:06:15.918 03:33:08 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.918 03:33:08 thread -- scripts/common.sh@355 -- # echo 1 00:06:15.918 03:33:08 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.918 03:33:08 thread -- scripts/common.sh@366 -- # decimal 2 00:06:15.918 03:33:08 thread -- scripts/common.sh@353 -- # local d=2 00:06:15.918 03:33:08 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.918 03:33:08 thread -- scripts/common.sh@355 -- # echo 2 00:06:15.918 03:33:08 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.918 03:33:08 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.918 03:33:08 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.918 03:33:08 thread -- scripts/common.sh@368 -- # return 0 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:15.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.918 --rc genhtml_branch_coverage=1 00:06:15.918 --rc genhtml_function_coverage=1 00:06:15.918 --rc genhtml_legend=1 00:06:15.918 --rc geninfo_all_blocks=1 00:06:15.918 --rc geninfo_unexecuted_blocks=1 00:06:15.918 00:06:15.918 ' 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:15.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.918 --rc genhtml_branch_coverage=1 00:06:15.918 --rc genhtml_function_coverage=1 00:06:15.918 --rc genhtml_legend=1 00:06:15.918 --rc geninfo_all_blocks=1 00:06:15.918 --rc geninfo_unexecuted_blocks=1 00:06:15.918 00:06:15.918 ' 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:15.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:15.918 --rc genhtml_branch_coverage=1 00:06:15.918 --rc genhtml_function_coverage=1 00:06:15.918 --rc genhtml_legend=1 00:06:15.918 --rc geninfo_all_blocks=1 00:06:15.918 --rc geninfo_unexecuted_blocks=1 00:06:15.918 00:06:15.918 ' 00:06:15.918 03:33:08 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:15.918 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.919 --rc genhtml_branch_coverage=1 00:06:15.919 --rc genhtml_function_coverage=1 00:06:15.919 --rc genhtml_legend=1 00:06:15.919 --rc geninfo_all_blocks=1 00:06:15.919 --rc geninfo_unexecuted_blocks=1 00:06:15.919 00:06:15.919 ' 00:06:15.919 03:33:08 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.919 03:33:08 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:15.919 03:33:08 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:15.919 03:33:08 thread -- common/autotest_common.sh@10 -- # set +x 00:06:15.919 ************************************ 00:06:15.919 START TEST thread_poller_perf 00:06:15.919 ************************************ 00:06:15.919 03:33:08 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:15.919 [2024-10-01 03:33:08.351181] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:15.919 [2024-10-01 03:33:08.351387] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59967 ] 00:06:16.179 [2024-10-01 03:33:08.497233] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.179 Running 1000 pollers for 1 seconds with 1 microseconds period. 
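Per the tool's own banner above ("Running 1000 pollers for 1 seconds with 1 microseconds period"), the poller_perf flags map as follows; the second run below uses -l 0, i.e. pollers with no period that fire on every reactor iteration. (Flag meanings inferred from the banner, not from the tool's usage text.)

    test/thread/poller_perf/poller_perf \
        -b 1000 \    # number of pollers to register
        -l 1 \       # poller period in microseconds (0 = run every iteration)
        -t 1         # test duration in seconds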
00:06:16.179 [2024-10-01 03:33:08.671518] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.557 ====================================== 00:06:17.557 busy:2612443886 (cyc) 00:06:17.557 total_run_count: 306000 00:06:17.557 tsc_hz: 2600000000 (cyc) 00:06:17.557 ====================================== 00:06:17.557 poller_cost: 8537 (cyc), 3283 (nsec) 00:06:17.557 00:06:17.557 real 0m1.628s 00:06:17.557 user 0m1.434s 00:06:17.557 sys 0m0.085s 00:06:17.557 03:33:09 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.557 ************************************ 00:06:17.557 END TEST thread_poller_perf 00:06:17.557 ************************************ 00:06:17.557 03:33:09 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:17.557 03:33:09 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.557 03:33:09 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:06:17.557 03:33:09 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.557 03:33:09 thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.557 ************************************ 00:06:17.557 START TEST thread_poller_perf 00:06:17.557 ************************************ 00:06:17.557 03:33:09 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:17.557 [2024-10-01 03:33:10.023866] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:17.557 [2024-10-01 03:33:10.024051] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60003 ] 00:06:17.817 [2024-10-01 03:33:10.169318] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.817 Running 1000 pollers for 1 seconds with 0 microseconds period. 
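The summary block of the first run reduces to two divisions: poller_cost in cycles is busy cycles over total_run_count, and the nanosecond figure converts cycles at the reported 2.6 GHz TSC. Checking the numbers above (the same formulas reproduce the 655-cycle / 251 ns figures of the second run below):

    echo $(( 2612443886 / 306000 ))              # 8537 cycles per poller invocation
    echo $(( 8537 * 1000000000 / 2600000000 ))   # 3283 ns at tsc_hz = 2600000000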
00:06:17.817 [2024-10-01 03:33:10.349694] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.196 ====================================== 00:06:19.197 busy:2603312470 (cyc) 00:06:19.197 total_run_count: 3972000 00:06:19.197 tsc_hz: 2600000000 (cyc) 00:06:19.197 ====================================== 00:06:19.197 poller_cost: 655 (cyc), 251 (nsec) 00:06:19.197 00:06:19.197 real 0m1.614s 00:06:19.197 user 0m1.441s 00:06:19.197 sys 0m0.065s 00:06:19.197 ************************************ 00:06:19.197 END TEST thread_poller_perf 00:06:19.197 ************************************ 00:06:19.197 03:33:11 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.197 03:33:11 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.197 03:33:11 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:19.197 ************************************ 00:06:19.197 END TEST thread 00:06:19.197 ************************************ 00:06:19.197 00:06:19.197 real 0m3.485s 00:06:19.197 user 0m2.984s 00:06:19.197 sys 0m0.277s 00:06:19.197 03:33:11 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.197 03:33:11 thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.197 03:33:11 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:19.197 03:33:11 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.197 03:33:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.197 03:33:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.197 03:33:11 -- common/autotest_common.sh@10 -- # set +x 00:06:19.197 ************************************ 00:06:19.197 START TEST app_cmdline 00:06:19.197 ************************************ 00:06:19.197 03:33:11 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:06:19.456 * Looking for test storage... 00:06:19.456 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:19.456 03:33:11 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:19.456 03:33:11 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:06:19.456 03:33:11 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:19.456 03:33:11 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:19.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:19.456 03:33:11 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:19.456 03:33:11 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:19.456 03:33:11 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:19.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.456 --rc genhtml_branch_coverage=1 00:06:19.456 --rc genhtml_function_coverage=1 00:06:19.456 --rc genhtml_legend=1 00:06:19.456 --rc geninfo_all_blocks=1 00:06:19.456 --rc geninfo_unexecuted_blocks=1 00:06:19.456 00:06:19.456 ' 00:06:19.456 03:33:11 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:19.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.456 --rc genhtml_branch_coverage=1 00:06:19.456 --rc genhtml_function_coverage=1 00:06:19.456 --rc genhtml_legend=1 00:06:19.456 --rc geninfo_all_blocks=1 00:06:19.456 --rc geninfo_unexecuted_blocks=1 00:06:19.456 00:06:19.456 ' 00:06:19.456 03:33:11 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:19.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.456 --rc genhtml_branch_coverage=1 00:06:19.456 --rc genhtml_function_coverage=1 00:06:19.456 --rc genhtml_legend=1 00:06:19.456 --rc geninfo_all_blocks=1 00:06:19.456 --rc geninfo_unexecuted_blocks=1 00:06:19.456 00:06:19.456 ' 00:06:19.456 03:33:11 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:19.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:19.456 --rc genhtml_branch_coverage=1 00:06:19.456 --rc genhtml_function_coverage=1 00:06:19.456 --rc genhtml_legend=1 00:06:19.456 --rc geninfo_all_blocks=1 00:06:19.456 --rc geninfo_unexecuted_blocks=1 00:06:19.456 00:06:19.456 ' 00:06:19.456 03:33:11 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:19.457 03:33:11 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60091 00:06:19.457 03:33:11 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60091 00:06:19.457 03:33:11 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 60091 ']' 00:06:19.457 03:33:11 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:19.457 03:33:11 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.457 03:33:11 app_cmdline -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:06:19.457 03:33:11 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.457 03:33:11 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.457 03:33:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:19.457 [2024-10-01 03:33:11.902109] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:19.457 [2024-10-01 03:33:11.902227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60091 ] 00:06:19.715 [2024-10-01 03:33:12.049656] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.715 [2024-10-01 03:33:12.227574] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.280 03:33:12 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.280 03:33:12 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:06:20.280 03:33:12 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:06:20.540 { 00:06:20.540 "version": "SPDK v25.01-pre git sha1 09cc66129", 00:06:20.540 "fields": { 00:06:20.540 "major": 25, 00:06:20.540 "minor": 1, 00:06:20.540 "patch": 0, 00:06:20.540 "suffix": "-pre", 00:06:20.540 "commit": "09cc66129" 00:06:20.540 } 00:06:20.540 } 00:06:20.540 03:33:12 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:20.540 03:33:12 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:20.540 03:33:12 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:20.540 03:33:12 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:20.540 03:33:12 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:20.540 03:33:12 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:20.540 03:33:12 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:20.540 03:33:12 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:20.540 03:33:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:20.540 03:33:12 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:20.540 03:33:13 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:20.540 03:33:13 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:20.540 03:33:13 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.540 03:33:13 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:06:20.540 03:33:13 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.540 03:33:13 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.541 03:33:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.541 03:33:13 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.541 03:33:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.541 03:33:13 app_cmdline -- common/autotest_common.sh@644 -- # type -P 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.541 03:33:13 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:20.541 03:33:13 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:06:20.541 03:33:13 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:06:20.541 03:33:13 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:20.801 request: 00:06:20.801 { 00:06:20.801 "method": "env_dpdk_get_mem_stats", 00:06:20.801 "req_id": 1 00:06:20.801 } 00:06:20.801 Got JSON-RPC error response 00:06:20.801 response: 00:06:20.801 { 00:06:20.801 "code": -32601, 00:06:20.801 "message": "Method not found" 00:06:20.801 } 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:20.801 03:33:13 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60091 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 60091 ']' 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 60091 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60091 00:06:20.801 killing process with pid 60091 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60091' 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@969 -- # kill 60091 00:06:20.801 03:33:13 app_cmdline -- common/autotest_common.sh@974 -- # wait 60091 00:06:22.712 ************************************ 00:06:22.712 END TEST app_cmdline 00:06:22.712 ************************************ 00:06:22.712 00:06:22.712 real 0m3.175s 00:06:22.712 user 0m3.437s 00:06:22.712 sys 0m0.418s 00:06:22.712 03:33:14 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.712 03:33:14 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:22.712 03:33:14 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:22.712 03:33:14 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:22.712 03:33:14 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.712 03:33:14 -- common/autotest_common.sh@10 -- # set +x 00:06:22.712 ************************************ 00:06:22.712 START TEST version 00:06:22.712 ************************************ 00:06:22.712 03:33:14 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:06:22.712 * Looking for test storage... 
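The app_cmdline suite that just finished started spdk_tgt with '--rpcs-allowed spdk_get_version,rpc_get_methods', so exactly those two methods answer (the test's '(( 2 == 2 ))' check) while anything else, here env_dpdk_get_mem_stats, is rejected with -32601. Reproduced outside the harness (paths from this log):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version          # allowed: returns the version JSON above
    scripts/rpc.py rpc_get_methods           # allowed: lists only the two whitelisted methods
    scripts/rpc.py env_dpdk_get_mem_stats    # blocked: {"code": -32601, "message": "Method not found"}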
00:06:22.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:06:22.712 03:33:14 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.712 03:33:14 version -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.712 03:33:14 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.712 03:33:15 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.712 03:33:15 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.712 03:33:15 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.712 03:33:15 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.712 03:33:15 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.712 03:33:15 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.712 03:33:15 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.712 03:33:15 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.712 03:33:15 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.712 03:33:15 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.712 03:33:15 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.712 03:33:15 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.712 03:33:15 version -- scripts/common.sh@344 -- # case "$op" in 00:06:22.712 03:33:15 version -- scripts/common.sh@345 -- # : 1 00:06:22.712 03:33:15 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.712 03:33:15 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.712 03:33:15 version -- scripts/common.sh@365 -- # decimal 1 00:06:22.712 03:33:15 version -- scripts/common.sh@353 -- # local d=1 00:06:22.712 03:33:15 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.712 03:33:15 version -- scripts/common.sh@355 -- # echo 1 00:06:22.712 03:33:15 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.712 03:33:15 version -- scripts/common.sh@366 -- # decimal 2 00:06:22.712 03:33:15 version -- scripts/common.sh@353 -- # local d=2 00:06:22.712 03:33:15 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.712 03:33:15 version -- scripts/common.sh@355 -- # echo 2 00:06:22.712 03:33:15 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.712 03:33:15 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.712 03:33:15 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.712 03:33:15 version -- scripts/common.sh@368 -- # return 0 00:06:22.712 03:33:15 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.712 03:33:15 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.712 --rc genhtml_branch_coverage=1 00:06:22.712 --rc genhtml_function_coverage=1 00:06:22.712 --rc genhtml_legend=1 00:06:22.712 --rc geninfo_all_blocks=1 00:06:22.712 --rc geninfo_unexecuted_blocks=1 00:06:22.712 00:06:22.712 ' 00:06:22.712 03:33:15 version -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.712 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.712 --rc genhtml_branch_coverage=1 00:06:22.712 --rc genhtml_function_coverage=1 00:06:22.712 --rc genhtml_legend=1 00:06:22.713 --rc geninfo_all_blocks=1 00:06:22.713 --rc geninfo_unexecuted_blocks=1 00:06:22.713 00:06:22.713 ' 00:06:22.713 03:33:15 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.713 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:22.713 --rc genhtml_branch_coverage=1 00:06:22.713 --rc genhtml_function_coverage=1 00:06:22.713 --rc genhtml_legend=1 00:06:22.713 --rc geninfo_all_blocks=1 00:06:22.713 --rc geninfo_unexecuted_blocks=1 00:06:22.713 00:06:22.713 ' 00:06:22.713 03:33:15 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.713 --rc genhtml_branch_coverage=1 00:06:22.713 --rc genhtml_function_coverage=1 00:06:22.713 --rc genhtml_legend=1 00:06:22.713 --rc geninfo_all_blocks=1 00:06:22.713 --rc geninfo_unexecuted_blocks=1 00:06:22.713 00:06:22.713 ' 00:06:22.713 03:33:15 version -- app/version.sh@17 -- # get_header_version major 00:06:22.713 03:33:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:22.713 03:33:15 version -- app/version.sh@14 -- # cut -f2 00:06:22.713 03:33:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.713 03:33:15 version -- app/version.sh@17 -- # major=25 00:06:22.713 03:33:15 version -- app/version.sh@18 -- # get_header_version minor 00:06:22.713 03:33:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:22.713 03:33:15 version -- app/version.sh@14 -- # cut -f2 00:06:22.713 03:33:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.713 03:33:15 version -- app/version.sh@18 -- # minor=1 00:06:22.713 03:33:15 version -- app/version.sh@19 -- # get_header_version patch 00:06:22.713 03:33:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.713 03:33:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:22.713 03:33:15 version -- app/version.sh@14 -- # cut -f2 00:06:22.713 03:33:15 version -- app/version.sh@19 -- # patch=0 00:06:22.713 03:33:15 version -- app/version.sh@20 -- # get_header_version suffix 00:06:22.713 03:33:15 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:06:22.713 03:33:15 version -- app/version.sh@14 -- # cut -f2 00:06:22.713 03:33:15 version -- app/version.sh@14 -- # tr -d '"' 00:06:22.713 03:33:15 version -- app/version.sh@20 -- # suffix=-pre 00:06:22.713 03:33:15 version -- app/version.sh@22 -- # version=25.1 00:06:22.713 03:33:15 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:22.713 03:33:15 version -- app/version.sh@28 -- # version=25.1rc0 00:06:22.713 03:33:15 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:06:22.713 03:33:15 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:22.713 03:33:15 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:22.713 03:33:15 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:22.713 00:06:22.713 real 0m0.206s 00:06:22.713 user 0m0.121s 00:06:22.713 sys 0m0.112s 00:06:22.713 03:33:15 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.713 03:33:15 version -- common/autotest_common.sh@10 -- # set +x 00:06:22.713 ************************************ 00:06:22.713 END TEST version 00:06:22.713 ************************************ 00:06:22.713 03:33:15 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:22.713 03:33:15 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:22.713 03:33:15 -- spdk/autotest.sh@194 -- # uname -s 00:06:22.713 03:33:15 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:06:22.713 03:33:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:22.713 03:33:15 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:22.713 03:33:15 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:06:22.713 03:33:15 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:22.713 03:33:15 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:22.713 03:33:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:22.713 03:33:15 -- common/autotest_common.sh@10 -- # set +x 00:06:22.713 ************************************ 00:06:22.713 START TEST blockdev_nvme 00:06:22.713 ************************************ 00:06:22.713 03:33:15 blockdev_nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:06:22.713 * Looking for test storage... 00:06:22.974 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:22.974 03:33:15 blockdev_nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:22.974 03:33:15 blockdev_nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:06:22.974 03:33:15 blockdev_nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:22.974 03:33:15 blockdev_nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.974 03:33:15 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.975 03:33:15 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.975 --rc genhtml_branch_coverage=1 00:06:22.975 --rc genhtml_function_coverage=1 00:06:22.975 --rc genhtml_legend=1 00:06:22.975 --rc geninfo_all_blocks=1 00:06:22.975 --rc geninfo_unexecuted_blocks=1 00:06:22.975 00:06:22.975 ' 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.975 --rc genhtml_branch_coverage=1 00:06:22.975 --rc genhtml_function_coverage=1 00:06:22.975 --rc genhtml_legend=1 00:06:22.975 --rc geninfo_all_blocks=1 00:06:22.975 --rc geninfo_unexecuted_blocks=1 00:06:22.975 00:06:22.975 ' 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.975 --rc genhtml_branch_coverage=1 00:06:22.975 --rc genhtml_function_coverage=1 00:06:22.975 --rc genhtml_legend=1 00:06:22.975 --rc geninfo_all_blocks=1 00:06:22.975 --rc geninfo_unexecuted_blocks=1 00:06:22.975 00:06:22.975 ' 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:22.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.975 --rc genhtml_branch_coverage=1 00:06:22.975 --rc genhtml_function_coverage=1 00:06:22.975 --rc genhtml_legend=1 00:06:22.975 --rc geninfo_all_blocks=1 00:06:22.975 --rc geninfo_unexecuted_blocks=1 00:06:22.975 00:06:22.975 ' 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:22.975 03:33:15 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60264 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60264 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@831 -- # '[' -z 60264 ']' 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:22.975 03:33:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:22.975 03:33:15 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:22.975 [2024-10-01 03:33:15.433850] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
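Once the target below is up, setup_nvme_conf feeds the gen_nvme.sh output to load_subsystem_config, attaching four emulated QEMU NVMe controllers in one shot. The same attachment could plausibly be done one controller at a time over RPC (addresses from the generated config visible below; treat this as an illustrative equivalent, not what blockdev.sh itself runs):

    scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t PCIe -a 0000:00:11.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme2 -t PCIe -a 0000:00:12.0
    scripts/rpc.py bdev_nvme_attach_controller -b Nvme3 -t PCIe -a 0000:00:13.0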
00:06:22.975 [2024-10-01 03:33:15.433980] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60264 ] 00:06:23.236 [2024-10-01 03:33:15.583360] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.236 [2024-10-01 03:33:15.770976] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@864 -- # return 0 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.180 03:33:16 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.180 03:33:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.442 03:33:16 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.442 03:33:16 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:24.442 03:33:16 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.442 03:33:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.442 03:33:16 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.442 03:33:16 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:06:24.442 03:33:16 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:06:24.442 03:33:16 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:24.442 03:33:16 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:06:24.442 03:33:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.442 03:33:16 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:24.442 03:33:16 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:06:24.442 03:33:16 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:06:24.443 03:33:16 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "fd65d222-d317-43c4-a8d9-c8ed02dc8816"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "fd65d222-d317-43c4-a8d9-c8ed02dc8816",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "a2ea9276-089f-49a7-857a-8db15cd042cd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "a2ea9276-089f-49a7-857a-8db15cd042cd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "91dcd61d-f24c-4cd1-9366-8b4918c91397"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "91dcd61d-f24c-4cd1-9366-8b4918c91397",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "be0f3d4b-807f-4924-be9a-0cfdb05027a6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "be0f3d4b-807f-4924-be9a-0cfdb05027a6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "dec4ce64-ad47-473c-ae2f-c6cc7c117984"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "dec4ce64-ad47-473c-ae2f-c6cc7c117984",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "771f83d9-0888-4ec0-ad8f-9ee392915166"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "771f83d9-0888-4ec0-ad8f-9ee392915166",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:24.443 03:33:16 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:06:24.443 03:33:16 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:06:24.443 03:33:16 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:06:24.443 03:33:16 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60264 00:06:24.443 03:33:16 blockdev_nvme -- common/autotest_common.sh@950 -- # '[' -z 60264 ']' 00:06:24.443 03:33:16 blockdev_nvme -- common/autotest_common.sh@954 -- # kill -0 60264 00:06:24.443 03:33:16 blockdev_nvme -- common/autotest_common.sh@955 -- # uname 00:06:24.443 03:33:16 
blockdev_nvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:24.443 03:33:16 blockdev_nvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60264 00:06:24.443 killing process with pid 60264 00:06:24.443 03:33:16 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:24.443 03:33:16 blockdev_nvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:24.443 03:33:16 blockdev_nvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60264' 00:06:24.443 03:33:16 blockdev_nvme -- common/autotest_common.sh@969 -- # kill 60264 00:06:24.443 03:33:16 blockdev_nvme -- common/autotest_common.sh@974 -- # wait 60264 00:06:26.354 03:33:18 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:26.354 03:33:18 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:26.354 03:33:18 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:06:26.354 03:33:18 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.354 03:33:18 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:26.354 ************************************ 00:06:26.354 START TEST bdev_hello_world 00:06:26.354 ************************************ 00:06:26.354 03:33:18 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:26.354 [2024-10-01 03:33:18.577992] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:26.354 [2024-10-01 03:33:18.578152] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60354 ] 00:06:26.354 [2024-10-01 03:33:18.730802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.616 [2024-10-01 03:33:18.966103] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.190 [2024-10-01 03:33:19.554332] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:27.190 [2024-10-01 03:33:19.554394] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:27.190 [2024-10-01 03:33:19.554421] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:27.190 [2024-10-01 03:33:19.557181] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:27.190 [2024-10-01 03:33:19.558273] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:27.190 [2024-10-01 03:33:19.558315] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:27.190 [2024-10-01 03:33:19.559245] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
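The hello_bdev example just traced opens Nvme0n1 through the bdev layer, writes "Hello World!", reads it back, and stops the app. As the command line shows, it needs only a JSON subsystem config and a bdev name; relative to the repo root:

  # Same invocation as traced above; bdev.json carries the four
  # bdev_nvme_attach_controller entries generated by gen_nvme.sh.
  ./build/examples/hello_bdev --json ./test/bdev/bdev.json -b Nvme0n1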
00:06:27.190 00:06:27.190 [2024-10-01 03:33:19.559329] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:28.134 00:06:28.134 real 0m1.999s 00:06:28.134 user 0m1.630s 00:06:28.134 sys 0m0.254s 00:06:28.134 03:33:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:28.134 03:33:20 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:28.134 ************************************ 00:06:28.134 END TEST bdev_hello_world 00:06:28.134 ************************************ 00:06:28.134 03:33:20 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:06:28.134 03:33:20 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:28.134 03:33:20 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:28.134 03:33:20 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:28.134 ************************************ 00:06:28.134 START TEST bdev_bounds 00:06:28.134 ************************************ 00:06:28.134 Process bdevio pid: 60396 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=60396 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 60396' 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 60396 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 60396 ']' 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.134 03:33:20 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:28.134 [2024-10-01 03:33:20.644723] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:06:28.134 [2024-10-01 03:33:20.645080] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60396 ] 00:06:28.437 [2024-10-01 03:33:20.799220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:28.725 [2024-10-01 03:33:21.032999] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.725 [2024-10-01 03:33:21.033245] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.725 [2024-10-01 03:33:21.033374] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.296 03:33:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:29.296 03:33:21 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:06:29.296 03:33:21 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:29.296 I/O targets: 00:06:29.296 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:29.296 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:06:29.296 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:29.296 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:29.296 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:29.296 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:29.296 00:06:29.296 00:06:29.296 CUnit - A unit testing framework for C - Version 2.1-3 00:06:29.296 http://cunit.sourceforge.net/ 00:06:29.296 00:06:29.296 00:06:29.296 Suite: bdevio tests on: Nvme3n1 00:06:29.296 Test: blockdev write read block ...passed 00:06:29.296 Test: blockdev write zeroes read block ...passed 00:06:29.296 Test: blockdev write zeroes read no split ...passed 00:06:29.296 Test: blockdev write zeroes read split ...passed 00:06:29.296 Test: blockdev write zeroes read split partial ...passed 00:06:29.296 Test: blockdev reset ...[2024-10-01 03:33:21.814700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:06:29.296 passed 00:06:29.296 Test: blockdev write read 8 blocks ...[2024-10-01 03:33:21.818157] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
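bdevio was launched with -w, so it builds the bdevs and then blocks on its RPC socket; the tests.py perform_tests call above is what actually fires the CUnit suites against the six I/O targets listed. Stripped of the run_test wrapper, the pair of commands reduces to:

  # -w: wait for the RPC trigger instead of running immediately;
  # -s 0 matches the PRE_RESERVED_MEM=0 set earlier in blockdev.sh.
  # (The harness waits for the RPC socket before invoking tests.py.)
  ./test/bdev/bdevio/bdevio -w -s 0 --json ./test/bdev/bdev.json &
  ./test/bdev/bdevio/tests.py perform_tests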
00:06:29.296 passed 00:06:29.296 Test: blockdev write read size > 128k ...passed 00:06:29.296 Test: blockdev write read invalid size ...passed 00:06:29.296 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.296 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.296 Test: blockdev write read max offset ...passed 00:06:29.296 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.296 Test: blockdev writev readv 8 blocks ...passed 00:06:29.296 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.296 Test: blockdev writev readv block ...passed 00:06:29.296 Test: blockdev writev readv size > 128k ...passed 00:06:29.296 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.296 Test: blockdev comparev and writev ...[2024-10-01 03:33:21.835849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b8a0a000 len:0x1000 00:06:29.296 [2024-10-01 03:33:21.835911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.296 passed 00:06:29.296 Test: blockdev nvme passthru rw ...passed 00:06:29.296 Test: blockdev nvme passthru vendor specific ...passed 00:06:29.296 Test: blockdev nvme admin passthru ...[2024-10-01 03:33:21.838369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.296 [2024-10-01 03:33:21.838416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.557 passed 00:06:29.557 Test: blockdev copy ...passed 00:06:29.557 Suite: bdevio tests on: Nvme2n3 00:06:29.557 Test: blockdev write read block ...passed 00:06:29.557 Test: blockdev write zeroes read block ...passed 00:06:29.557 Test: blockdev write zeroes read no split ...passed 00:06:29.557 Test: blockdev write zeroes read split ...passed 00:06:29.557 Test: blockdev write zeroes read split partial ...passed 00:06:29.557 Test: blockdev reset ...[2024-10-01 03:33:21.902850] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:29.557 [2024-10-01 03:33:21.908264] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
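The *NOTICE* completion lines above, COMPARE FAILURE (02/85) and INVALID OPCODE (00/01), are the target logging commands that bdevio submits deliberately to exercise error paths; they are not test failures. Which operations a suite can exercise follows from the supported_io_types flags in the earlier bdev_get_bdevs dump; a sketch of pulling the compare flag per bdev with the same rpc.py-plus-jq pattern the harness uses:

  # Sketch: list per-bdev COMPARE support from the running target.
  ./scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.name): compare=\(.supported_io_types.compare)"'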
00:06:29.557 passed 00:06:29.557 Test: blockdev write read 8 blocks ...passed 00:06:29.557 Test: blockdev write read size > 128k ...passed 00:06:29.557 Test: blockdev write read invalid size ...passed 00:06:29.557 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.557 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.557 Test: blockdev write read max offset ...passed 00:06:29.557 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.557 Test: blockdev writev readv 8 blocks ...passed 00:06:29.557 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.557 Test: blockdev writev readv block ...passed 00:06:29.557 Test: blockdev writev readv size > 128k ...passed 00:06:29.557 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.557 Test: blockdev comparev and writev ...[2024-10-01 03:33:21.928663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae004000 len:0x1000 00:06:29.557 [2024-10-01 03:33:21.928846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.557 passed 00:06:29.557 Test: blockdev nvme passthru rw ...passed 00:06:29.557 Test: blockdev nvme passthru vendor specific ...passed 00:06:29.557 Test: blockdev nvme admin passthru ...[2024-10-01 03:33:21.931075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.557 [2024-10-01 03:33:21.931130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.557 passed 00:06:29.557 Test: blockdev copy ...passed 00:06:29.557 Suite: bdevio tests on: Nvme2n2 00:06:29.557 Test: blockdev write read block ...passed 00:06:29.557 Test: blockdev write zeroes read block ...passed 00:06:29.557 Test: blockdev write zeroes read no split ...passed 00:06:29.557 Test: blockdev write zeroes read split ...passed 00:06:29.557 Test: blockdev write zeroes read split partial ...passed 00:06:29.557 Test: blockdev reset ...[2024-10-01 03:33:21.991018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:29.557 [2024-10-01 03:33:21.996878] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:29.557 passed 00:06:29.557 Test: blockdev write read 8 blocks ...passed 00:06:29.557 Test: blockdev write read size > 128k ...passed 00:06:29.557 Test: blockdev write read invalid size ...passed 00:06:29.557 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.557 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.557 Test: blockdev write read max offset ...passed 00:06:29.557 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.557 Test: blockdev writev readv 8 blocks ...passed 00:06:29.557 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.557 Test: blockdev writev readv block ...passed 00:06:29.557 Test: blockdev writev readv size > 128k ...passed 00:06:29.557 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.557 Test: blockdev comparev and writev ...[2024-10-01 03:33:22.017009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5c3a000 len:0x1000 00:06:29.557 [2024-10-01 03:33:22.017063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.557 passed 00:06:29.557 Test: blockdev nvme passthru rw ...passed 00:06:29.557 Test: blockdev nvme passthru vendor specific ...[2024-10-01 03:33:22.018826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.557 [2024-10-01 03:33:22.018864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.557 passed 00:06:29.557 Test: blockdev nvme admin passthru ...passed 00:06:29.557 Test: blockdev copy ...passed 00:06:29.557 Suite: bdevio tests on: Nvme2n1 00:06:29.557 Test: blockdev write read block ...passed 00:06:29.557 Test: blockdev write zeroes read block ...passed 00:06:29.557 Test: blockdev write zeroes read no split ...passed 00:06:29.557 Test: blockdev write zeroes read split ...passed 00:06:29.557 Test: blockdev write zeroes read split partial ...passed 00:06:29.557 Test: blockdev reset ...[2024-10-01 03:33:22.080419] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:06:29.557 [2024-10-01 03:33:22.087224] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:06:29.557 passed 00:06:29.557 Test: blockdev write read 8 blocks ...passed 00:06:29.557 Test: blockdev write read size > 128k ...passed 00:06:29.557 Test: blockdev write read invalid size ...passed 00:06:29.557 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.557 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.557 Test: blockdev write read max offset ...passed 00:06:29.557 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.557 Test: blockdev writev readv 8 blocks ...passed 00:06:29.557 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.557 Test: blockdev writev readv block ...passed 00:06:29.557 Test: blockdev writev readv size > 128k ...passed 00:06:29.818 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.819 Test: blockdev comparev and writev ...[2024-10-01 03:33:22.109640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5c34000 len:0x1000 00:06:29.819 [2024-10-01 03:33:22.109699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.819 passed 00:06:29.819 Test: blockdev nvme passthru rw ...passed 00:06:29.819 Test: blockdev nvme passthru vendor specific ...passed 00:06:29.819 Test: blockdev nvme admin passthru ...[2024-10-01 03:33:22.112425] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.819 [2024-10-01 03:33:22.112472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.819 passed 00:06:29.819 Test: blockdev copy ...passed 00:06:29.819 Suite: bdevio tests on: Nvme1n1 00:06:29.819 Test: blockdev write read block ...passed 00:06:29.819 Test: blockdev write zeroes read block ...passed 00:06:29.819 Test: blockdev write zeroes read no split ...passed 00:06:29.819 Test: blockdev write zeroes read split ...passed 00:06:29.819 Test: blockdev write zeroes read split partial ...passed 00:06:29.819 Test: blockdev reset ...[2024-10-01 03:33:22.181177] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:06:29.819 [2024-10-01 03:33:22.184850] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
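Note that the three Nvme2n* suites above all reset the controller at 0000:00:12.0: per the earlier dump they are three namespaces of a single controller (serial 12342), whereas Nvme1n1 (just reset above at 0000:00:11.0) and Nvme0n1 each sit on their own controller. A sketch of grouping bdevs by controller from that same dump:

  # Sketch: print "<pci_address> <bdev>" pairs so bdevs sharing a
  # controller line up when sorted.
  ./scripts/rpc.py bdev_get_bdevs \
      | jq -r '.[] | "\(.driver_specific.nvme[0].pci_address) \(.name)"' | sort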
00:06:29.819 passed 00:06:29.819 Test: blockdev write read 8 blocks ...passed 00:06:29.819 Test: blockdev write read size > 128k ...passed 00:06:29.819 Test: blockdev write read invalid size ...passed 00:06:29.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.819 Test: blockdev write read max offset ...passed 00:06:29.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.819 Test: blockdev writev readv 8 blocks ...passed 00:06:29.819 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.819 Test: blockdev writev readv block ...passed 00:06:29.819 Test: blockdev writev readv size > 128k ...passed 00:06:29.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.819 Test: blockdev comparev and writev ...[2024-10-01 03:33:22.206299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c5c30000 len:0x1000 00:06:29.819 [2024-10-01 03:33:22.206358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:29.819 passed 00:06:29.819 Test: blockdev nvme passthru rw ...passed 00:06:29.819 Test: blockdev nvme passthru vendor specific ...passed 00:06:29.819 Test: blockdev nvme admin passthru ...[2024-10-01 03:33:22.209208] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:29.819 [2024-10-01 03:33:22.209260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:29.819 passed 00:06:29.819 Test: blockdev copy ...passed 00:06:29.819 Suite: bdevio tests on: Nvme0n1 00:06:29.819 Test: blockdev write read block ...passed 00:06:29.819 Test: blockdev write zeroes read block ...passed 00:06:29.819 Test: blockdev write zeroes read no split ...passed 00:06:29.819 Test: blockdev write zeroes read split ...passed 00:06:29.819 Test: blockdev write zeroes read split partial ...passed 00:06:29.819 Test: blockdev reset ...[2024-10-01 03:33:22.271823] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:06:29.819 [2024-10-01 03:33:22.277291] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
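This final suite targets Nvme0n1, the one bdev in the dump with separate metadata ("md_size": 64, "md_interleave": false), which is why comparev_and_writev is skipped just below. A sketch of checking those fields directly for a single bdev by name:

  # Sketch: inspect the metadata layout of one bdev.
  ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 \
      | jq '.[0] | {md_size, md_interleave, dif_type}'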
00:06:29.819 passed 00:06:29.819 Test: blockdev write read 8 blocks ...passed 00:06:29.819 Test: blockdev write read size > 128k ...passed 00:06:29.819 Test: blockdev write read invalid size ...passed 00:06:29.819 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:29.819 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:29.819 Test: blockdev write read max offset ...passed 00:06:29.819 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:29.819 Test: blockdev writev readv 8 blocks ...passed 00:06:29.819 Test: blockdev writev readv 30 x 1block ...passed 00:06:29.819 Test: blockdev writev readv block ...passed 00:06:29.819 Test: blockdev writev readv size > 128k ...passed 00:06:29.819 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:29.819 Test: blockdev comparev and writev ...passed 00:06:29.819 Test: blockdev nvme passthru rw ...[2024-10-01 03:33:22.295489] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:29.819 separate metadata which is not supported yet. 00:06:29.819 passed 00:06:29.819 Test: blockdev nvme passthru vendor specific ...passed 00:06:29.819 Test: blockdev nvme admin passthru ...[2024-10-01 03:33:22.296970] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:29.819 [2024-10-01 03:33:22.297047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:29.819 passed 00:06:29.819 Test: blockdev copy ...passed 00:06:29.819 00:06:29.819 Run Summary: Type Total Ran Passed Failed Inactive 00:06:29.819 suites 6 6 n/a 0 0 00:06:29.819 tests 138 138 138 0 0 00:06:29.819 asserts 893 893 893 0 n/a 00:06:29.819 00:06:29.819 Elapsed time = 1.371 seconds 00:06:29.819 0 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 60396 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 60396 ']' 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 60396 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60396 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:29.819 killing process with pid 60396 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60396' 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 60396 00:06:29.819 03:33:22 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 60396 00:06:30.761 03:33:23 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:30.761 00:06:30.761 real 0m2.564s 00:06:30.761 user 0m6.153s 00:06:30.761 sys 0m0.373s 00:06:30.761 03:33:23 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:30.761 ************************************ 00:06:30.761 END TEST bdev_bounds 00:06:30.761 ************************************ 00:06:30.761 03:33:23 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:30.761 03:33:23 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:30.761 03:33:23 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:06:30.761 03:33:23 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:30.761 03:33:23 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:30.761 ************************************ 00:06:30.761 START TEST bdev_nbd 00:06:30.761 ************************************ 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:06:30.761 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:30.762 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
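The bdev_nbd test below starts bdev_svc on its own socket (/var/tmp/spdk-nbd.sock) and exports each bdev as a kernel block device. The per-device cycle it traces (start the NBD disk, dd one block through it, stop it) reduces to three commands; /tmp/nbdtest here is an illustrative stand-in for the test's scratch file:

  # Sketch: export a bdev over NBD, verify with a direct-I/O read,
  # then tear the device down.
  rpc='./scripts/rpc.py -s /var/tmp/spdk-nbd.sock'
  $rpc nbd_start_disk Nvme0n1 /dev/nbd0
  dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
  $rpc nbd_stop_disk /dev/nbd0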
00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=60450 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 60450 /var/tmp/spdk-nbd.sock 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 60450 ']' 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:30.762 03:33:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:30.762 [2024-10-01 03:33:23.277033] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:30.762 [2024-10-01 03:33:23.277346] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:31.023 [2024-10-01 03:33:23.425262] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.284 [2024-10-01 03:33:23.609211] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:31.857 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.117 1+0 records in 00:06:32.117 1+0 records out 00:06:32.117 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106635 s, 3.8 MB/s 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.117 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.118 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.118 1+0 records in 00:06:32.118 1+0 records out 00:06:32.118 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000963875 s, 4.2 MB/s 00:06:32.118 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.379 1+0 records in 00:06:32.379 1+0 records out 00:06:32.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000932958 s, 4.4 MB/s 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.379 03:33:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.641 1+0 records in 00:06:32.641 1+0 records out 00:06:32.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00116468 s, 3.5 MB/s 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.641 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:32.902 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:32.903 1+0 records in 00:06:32.903 1+0 records out 00:06:32.903 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000751936 s, 5.4 MB/s 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:32.903 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:33.164 1+0 records in 00:06:33.164 1+0 records out 00:06:33.164 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00045526 s, 9.0 MB/s 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:06:33.164 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd0", 00:06:33.424 "bdev_name": "Nvme0n1" 00:06:33.424 }, 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd1", 00:06:33.424 "bdev_name": "Nvme1n1" 00:06:33.424 }, 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd2", 00:06:33.424 "bdev_name": "Nvme2n1" 00:06:33.424 }, 00:06:33.424 
{ 00:06:33.424 "nbd_device": "/dev/nbd3", 00:06:33.424 "bdev_name": "Nvme2n2" 00:06:33.424 }, 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd4", 00:06:33.424 "bdev_name": "Nvme2n3" 00:06:33.424 }, 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd5", 00:06:33.424 "bdev_name": "Nvme3n1" 00:06:33.424 } 00:06:33.424 ]' 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd0", 00:06:33.424 "bdev_name": "Nvme0n1" 00:06:33.424 }, 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd1", 00:06:33.424 "bdev_name": "Nvme1n1" 00:06:33.424 }, 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd2", 00:06:33.424 "bdev_name": "Nvme2n1" 00:06:33.424 }, 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd3", 00:06:33.424 "bdev_name": "Nvme2n2" 00:06:33.424 }, 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd4", 00:06:33.424 "bdev_name": "Nvme2n3" 00:06:33.424 }, 00:06:33.424 { 00:06:33.424 "nbd_device": "/dev/nbd5", 00:06:33.424 "bdev_name": "Nvme3n1" 00:06:33.424 } 00:06:33.424 ]' 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.424 03:33:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.684 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.685 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.685 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.685 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.685 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.685 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.685 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.685 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.685 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.685 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.945 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.207 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:34.468 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:34.468 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:34.468 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:34.468 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.468 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.468 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:34.468 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.468 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.468 03:33:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:34.468 03:33:26 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.727 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:34.987 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:35.246 /dev/nbd0 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.246 1+0 records in 00:06:35.246 1+0 records out 00:06:35.246 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760591 s, 5.4 MB/s 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:35.246 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:06:35.507 /dev/nbd1 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.507 1+0 records in 00:06:35.507 1+0 records out 00:06:35.507 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000758617 s, 5.4 MB/s 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:35.507 03:33:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:06:35.768 /dev/nbd10 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:35.768 1+0 records in 00:06:35.768 1+0 records out 00:06:35.768 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00136598 s, 3.0 MB/s 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
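The waitfornbd traces above and below all follow one pattern from common/autotest_common.sh: poll /proc/partitions until the kernel has registered the freshly attached nbd node, then prove the device actually answers I/O with a single 4 KiB O_DIRECT read whose output size is checked. A minimal standalone sketch of that pattern, assuming a scratch-file path and a short back-off between polls (the 20-try budget and the dd/stat/rm sequence mirror the trace):

    waitfornbd() {
        local nbd_name=$1 scratch=/tmp/nbdtest size i
        # Wait until the device shows up in the partition table.
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1   # assumed back-off; in the trace the device is found on the first try
        done
        # One direct-I/O read proves the block device is usable, not just registered.
        dd if="/dev/$nbd_name" of="$scratch" bs=4096 count=1 iflag=direct || return 1
        size=$(stat -c %s "$scratch")
        rm -f "$scratch"
        [ "$size" != 0 ]
    }

Called as waitfornbd nbd5 after nbd_start_disk, it returns 0 only once the device is both visible and readable.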
00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:35.768 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:06:35.768 /dev/nbd11 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.028 1+0 records in 00:06:36.028 1+0 records out 00:06:36.028 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00338504 s, 1.2 MB/s 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.028 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:06:36.028 /dev/nbd12 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( 
i = 1 )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.288 1+0 records in 00:06:36.288 1+0 records out 00:06:36.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00108471 s, 3.8 MB/s 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:36.288 /dev/nbd13 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:36.288 1+0 records in 00:06:36.288 1+0 records out 00:06:36.288 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000887633 s, 4.6 MB/s 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:06:36.288 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:36.549 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:36.549 03:33:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:06:36.549 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:36.550 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:36.550 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:36.550 03:33:28 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:36.550 03:33:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd0", 00:06:36.550 "bdev_name": "Nvme0n1" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd1", 00:06:36.550 "bdev_name": "Nvme1n1" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd10", 00:06:36.550 "bdev_name": "Nvme2n1" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd11", 00:06:36.550 "bdev_name": "Nvme2n2" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd12", 00:06:36.550 "bdev_name": "Nvme2n3" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd13", 00:06:36.550 "bdev_name": "Nvme3n1" 00:06:36.550 } 00:06:36.550 ]' 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd0", 00:06:36.550 "bdev_name": "Nvme0n1" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd1", 00:06:36.550 "bdev_name": "Nvme1n1" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd10", 00:06:36.550 "bdev_name": "Nvme2n1" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd11", 00:06:36.550 "bdev_name": "Nvme2n2" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd12", 00:06:36.550 "bdev_name": "Nvme2n3" 00:06:36.550 }, 00:06:36.550 { 00:06:36.550 "nbd_device": "/dev/nbd13", 00:06:36.550 "bdev_name": "Nvme3n1" 00:06:36.550 } 00:06:36.550 ]' 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:36.550 /dev/nbd1 00:06:36.550 /dev/nbd10 00:06:36.550 /dev/nbd11 00:06:36.550 /dev/nbd12 00:06:36.550 /dev/nbd13' 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:36.550 /dev/nbd1 00:06:36.550 /dev/nbd10 00:06:36.550 /dev/nbd11 00:06:36.550 /dev/nbd12 00:06:36.550 /dev/nbd13' 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:36.550 256+0 records in 00:06:36.550 256+0 records out 00:06:36.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00621344 s, 169 MB/s 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.550 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:36.811 256+0 records in 00:06:36.811 256+0 records out 00:06:36.811 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.238276 s, 4.4 MB/s 00:06:36.811 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:36.811 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:37.073 256+0 records in 00:06:37.073 256+0 records out 00:06:37.073 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.257885 s, 4.1 MB/s 00:06:37.073 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.073 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:37.335 256+0 records in 00:06:37.335 256+0 records out 00:06:37.335 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.245134 s, 4.3 MB/s 00:06:37.335 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.335 03:33:29 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:37.614 256+0 records in 00:06:37.614 256+0 records out 00:06:37.614 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.259794 s, 4.0 MB/s 00:06:37.614 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.614 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:37.880 256+0 records in 00:06:37.880 256+0 records out 00:06:37.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.264208 s, 4.0 MB/s 00:06:37.880 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:37.880 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:38.141 256+0 records in 00:06:38.141 256+0 records out 00:06:38.141 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.270647 s, 3.9 MB/s 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:38.141 
03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.141 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.401 03:33:30 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:38.662 03:33:31 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:38.662 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:38.662 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:38.662 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.662 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.662 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:38.662 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.662 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.662 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.662 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:38.924 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.185 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:39.446 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:39.446 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:39.446 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:39.446 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.447 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.447 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w 
nbd12 /proc/partitions 00:06:39.447 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.447 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.447 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:39.447 03:33:31 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.709 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:39.970 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:40.231 malloc_lvol_verify 00:06:40.231 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:40.231 0eec8ad9-8dc1-4909-9118-86b49b11c7e9 00:06:40.493 03:33:32 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:40.493 b23779af-42f0-4fb7-9a85-3620fc0e465c 00:06:40.493 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:40.757 /dev/nbd0 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:40.757 mke2fs 1.47.0 (5-Feb-2023) 00:06:40.757 Discarding device blocks: 0/4096 done 00:06:40.757 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:40.757 00:06:40.757 Allocating group tables: 0/1 done 00:06:40.757 Writing inode tables: 0/1 done 00:06:40.757 Creating journal (1024 blocks): done 00:06:40.757 Writing superblocks and filesystem accounting information: 0/1 done 00:06:40.757 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.757 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 60450 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 60450 ']' 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 60450 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60450 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 
-- # '[' reactor_0 = sudo ']' 00:06:41.019 killing process with pid 60450 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60450' 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 60450 00:06:41.019 03:33:33 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 60450 00:06:41.962 03:33:34 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:41.962 00:06:41.962 real 0m11.284s 00:06:41.962 user 0m15.155s 00:06:41.962 sys 0m3.525s 00:06:41.962 03:33:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:41.962 ************************************ 00:06:41.962 END TEST bdev_nbd 00:06:41.962 ************************************ 00:06:41.962 03:33:34 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:42.223 03:33:34 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:06:42.223 03:33:34 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:06:42.223 skipping fio tests on NVMe due to multi-ns failures. 00:06:42.223 03:33:34 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:06:42.224 03:33:34 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:42.224 03:33:34 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:42.224 03:33:34 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:42.224 03:33:34 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:42.224 03:33:34 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:42.224 ************************************ 00:06:42.224 START TEST bdev_verify 00:06:42.224 ************************************ 00:06:42.224 03:33:34 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:42.224 [2024-10-01 03:33:34.638233] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:42.224 [2024-10-01 03:33:34.638376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60842 ] 00:06:42.485 [2024-10-01 03:33:34.791850] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.485 [2024-10-01 03:33:34.991472] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.485 [2024-10-01 03:33:34.991596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.058 Running I/O for 5 seconds... 
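The five-second verify results that follow come from bdevperf, which run_test launched with the flags visible in the trace: queue depth 128, 4 KiB I/Os, the verify workload (completed reads are checked against previously written data), a 5 s duration, and core mask 0x3, which is why every Nvme bdev appears twice in the result table, once per reactor core. The invocation as it appears in the trace, reusable against any bdev JSON config:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3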
00:06:48.223 19968.00 IOPS, 78.00 MiB/s 19584.00 IOPS, 76.50 MiB/s 19200.00 IOPS, 75.00 MiB/s 18944.00 IOPS, 74.00 MiB/s 19020.80 IOPS, 74.30 MiB/s 00:06:48.223 Latency(us) 00:06:48.223 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:48.223 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.223 Verification LBA range: start 0x0 length 0xbd0bd 00:06:48.223 Nvme0n1 : 5.06 1541.97 6.02 0.00 0.00 82717.99 18652.55 92758.65 00:06:48.223 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.223 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:48.223 Nvme0n1 : 5.06 1568.53 6.13 0.00 0.00 81370.36 19358.33 91952.05 00:06:48.223 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.223 Verification LBA range: start 0x0 length 0xa0000 00:06:48.223 Nvme1n1 : 5.07 1541.51 6.02 0.00 0.00 82487.98 21072.34 76223.41 00:06:48.223 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.223 Verification LBA range: start 0xa0000 length 0xa0000 00:06:48.223 Nvme1n1 : 5.06 1568.13 6.13 0.00 0.00 81274.12 21273.99 89128.96 00:06:48.223 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.223 Verification LBA range: start 0x0 length 0x80000 00:06:48.223 Nvme2n1 : 5.08 1548.55 6.05 0.00 0.00 81773.38 6452.78 63721.16 00:06:48.223 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.223 Verification LBA range: start 0x80000 length 0x80000 00:06:48.223 Nvme2n1 : 5.06 1567.77 6.12 0.00 0.00 80836.66 20971.52 68964.04 00:06:48.223 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.223 Verification LBA range: start 0x0 length 0x80000 00:06:48.223 Nvme2n2 : 5.08 1548.13 6.05 0.00 0.00 81592.58 5999.06 60494.77 00:06:48.223 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.223 Verification LBA range: start 0x80000 length 0x80000 00:06:48.223 Nvme2n2 : 5.08 1576.24 6.16 0.00 0.00 80249.96 4159.02 70980.53 00:06:48.224 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.224 Verification LBA range: start 0x0 length 0x80000 00:06:48.224 Nvme2n3 : 5.10 1556.28 6.08 0.00 0.00 81127.84 11494.01 61301.37 00:06:48.224 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.224 Verification LBA range: start 0x80000 length 0x80000 00:06:48.224 Nvme2n3 : 5.09 1584.78 6.19 0.00 0.00 79767.46 10082.46 71787.13 00:06:48.224 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:48.224 Verification LBA range: start 0x0 length 0x20000 00:06:48.224 Nvme3n1 : 5.10 1555.86 6.08 0.00 0.00 81020.41 11443.59 62511.26 00:06:48.224 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:48.224 Verification LBA range: start 0x20000 length 0x20000 00:06:48.224 Nvme3n1 : 5.09 1584.44 6.19 0.00 0.00 79653.78 9628.75 72593.72 00:06:48.224 =================================================================================================================== 00:06:48.224 Total : 18742.18 73.21 0.00 0.00 81146.94 4159.02 92758.65 00:06:49.611 00:06:49.611 real 0m7.424s 00:06:49.611 user 0m13.551s 00:06:49.611 sys 0m0.265s 00:06:49.611 03:33:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.611 03:33:41 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:49.611 ************************************ 00:06:49.611 
END TEST bdev_verify 00:06:49.611 ************************************ 00:06:49.611 03:33:42 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:49.611 03:33:42 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:06:49.611 03:33:42 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.611 03:33:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:49.611 ************************************ 00:06:49.611 START TEST bdev_verify_big_io 00:06:49.611 ************************************ 00:06:49.611 03:33:42 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:49.611 [2024-10-01 03:33:42.119939] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:49.611 [2024-10-01 03:33:42.120077] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60942 ] 00:06:49.872 [2024-10-01 03:33:42.271300] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.133 [2024-10-01 03:33:42.457782] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.133 [2024-10-01 03:33:42.457835] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.706 Running I/O for 5 seconds... 00:06:57.169 1413.00 IOPS, 88.31 MiB/s 2028.50 IOPS, 126.78 MiB/s 1790.00 IOPS, 111.88 MiB/s 1886.75 IOPS, 117.92 MiB/s 00:06:57.169 Latency(us) 00:06:57.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:57.169 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x0 length 0xbd0b 00:06:57.169 Nvme0n1 : 5.74 80.87 5.05 0.00 0.00 1507134.61 14317.10 1651910.50 00:06:57.169 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:57.169 Nvme0n1 : 5.64 158.18 9.89 0.00 0.00 790811.48 19559.98 806596.92 00:06:57.169 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x0 length 0xa000 00:06:57.169 Nvme1n1 : 5.80 78.79 4.92 0.00 0.00 1466787.68 60494.77 2490771.30 00:06:57.169 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0xa000 length 0xa000 00:06:57.169 Nvme1n1 : 5.64 156.19 9.76 0.00 0.00 776854.00 53235.40 735616.39 00:06:57.169 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x0 length 0x8000 00:06:57.169 Nvme2n1 : 5.89 89.77 5.61 0.00 0.00 1222658.37 23895.43 2555299.05 00:06:57.169 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x8000 length 0x8000 00:06:57.169 Nvme2n1 : 5.64 158.79 9.92 0.00 0.00 749940.92 85499.27 713031.68 00:06:57.169 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x0 length 0x8000 00:06:57.169 Nvme2n2 : 5.94 103.63 
6.48 0.00 0.00 1024332.43 20467.40 2594015.70 00:06:57.169 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x8000 length 0x8000 00:06:57.169 Nvme2n2 : 5.64 158.73 9.92 0.00 0.00 731258.20 86305.87 729163.62 00:06:57.169 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x0 length 0x8000 00:06:57.169 Nvme2n3 : 6.21 168.23 10.51 0.00 0.00 600478.32 8267.62 2619826.81 00:06:57.169 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x8000 length 0x8000 00:06:57.169 Nvme2n3 : 5.71 168.08 10.50 0.00 0.00 677207.64 26012.75 751748.33 00:06:57.169 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x0 length 0x2000 00:06:57.169 Nvme3n1 : 6.45 289.45 18.09 0.00 0.00 333939.72 551.38 2361715.79 00:06:57.169 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:57.169 Verification LBA range: start 0x2000 length 0x2000 00:06:57.169 Nvme3n1 : 5.72 178.97 11.19 0.00 0.00 622396.22 1247.70 825955.25 00:06:57.169 =================================================================================================================== 00:06:57.169 Total : 1789.68 111.86 0.00 0.00 750676.90 551.38 2619826.81 00:06:59.085 00:06:59.085 real 0m9.226s 00:06:59.085 user 0m17.319s 00:06:59.085 sys 0m0.237s 00:06:59.085 03:33:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:59.085 ************************************ 00:06:59.085 03:33:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 END TEST bdev_verify_big_io 00:06:59.085 ************************************ 00:06:59.085 03:33:51 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.085 03:33:51 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:06:59.085 03:33:51 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:59.085 03:33:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:59.085 ************************************ 00:06:59.085 START TEST bdev_write_zeroes 00:06:59.085 ************************************ 00:06:59.085 03:33:51 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:59.085 [2024-10-01 03:33:51.404301] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:06:59.085 [2024-10-01 03:33:51.404403] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61064 ] 00:06:59.085 [2024-10-01 03:33:51.550261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.346 [2024-10-01 03:33:51.728533] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.917 Running I/O for 1 seconds... 
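The one-second run reported next is the write_zeroes workload on a single core (the EAL parameters above show -c 0x1): bdevperf issues zero-fill requests rather than data writes, so the table that follows reports throughput and latency only. The invocation per the trace:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w write_zeroes -t 1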
00:07:00.852 65984.00 IOPS, 257.75 MiB/s 00:07:00.852 Latency(us) 00:07:00.852 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:00.852 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.852 Nvme0n1 : 1.02 10971.73 42.86 0.00 0.00 11643.56 5721.80 23290.49 00:07:00.852 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.852 Nvme1n1 : 1.02 10959.22 42.81 0.00 0.00 11642.61 8418.86 22887.19 00:07:00.852 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.852 Nvme2n1 : 1.02 10946.78 42.76 0.00 0.00 11603.72 8418.86 20064.10 00:07:00.852 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.852 Nvme2n2 : 1.02 10934.31 42.71 0.00 0.00 11577.41 8418.86 19257.50 00:07:00.852 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.852 Nvme2n3 : 1.03 10921.99 42.66 0.00 0.00 11551.10 8519.68 18753.38 00:07:00.852 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:07:00.852 Nvme3n1 : 1.03 10847.36 42.37 0.00 0.00 11599.83 8267.62 20568.22 00:07:00.852 =================================================================================================================== 00:07:00.852 Total : 65581.39 256.18 0.00 0.00 11603.04 5721.80 23290.49 00:07:01.821 00:07:01.821 real 0m2.806s 00:07:01.821 user 0m2.507s 00:07:01.821 sys 0m0.183s 00:07:01.821 03:33:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.821 03:33:54 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:07:01.821 ************************************ 00:07:01.821 END TEST bdev_write_zeroes 00:07:01.821 ************************************ 00:07:01.821 03:33:54 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.821 03:33:54 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:01.821 03:33:54 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.821 03:33:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:01.821 ************************************ 00:07:01.821 START TEST bdev_json_nonenclosed 00:07:01.821 ************************************ 00:07:01.821 03:33:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:01.821 [2024-10-01 03:33:54.261380] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:01.821 [2024-10-01 03:33:54.261507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61117 ] 00:07:02.079 [2024-10-01 03:33:54.409307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.079 [2024-10-01 03:33:54.588086] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.079 [2024-10-01 03:33:54.588159] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
00:07:02.079 [2024-10-01 03:33:54.588175] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:02.079 [2024-10-01 03:33:54.588185] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.337 00:07:02.337 real 0m0.672s 00:07:02.337 user 0m0.465s 00:07:02.337 sys 0m0.102s 00:07:02.337 03:33:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.337 03:33:54 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:02.337 ************************************ 00:07:02.337 END TEST bdev_json_nonenclosed 00:07:02.337 ************************************ 00:07:02.596 03:33:54 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:02.596 03:33:54 blockdev_nvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:02.596 03:33:54 blockdev_nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.596 03:33:54 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:02.596 ************************************ 00:07:02.596 START TEST bdev_json_nonarray 00:07:02.596 ************************************ 00:07:02.596 03:33:54 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:02.596 [2024-10-01 03:33:54.976882] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:02.596 [2024-10-01 03:33:54.976993] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61148 ] 00:07:02.596 [2024-10-01 03:33:55.126712] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.854 [2024-10-01 03:33:55.301290] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.854 [2024-10-01 03:33:55.301375] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
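The 'subsystems' should be an array error above is the sibling negative case: json_config.c line 614 fires when "subsystems" is present but is not an array. A config that would trip exactly this check in json_config_prepare_ctx looks like the sketch below; the repo's nonarray.json fixture is not shown in this log, so treat the contents as illustrative:

    # sketch: "subsystems" given as an object rather than an array
    cat > /tmp/nonarray.json <<'EOF'
    {
      "subsystems": { "subsystem": "bdev", "config": [] }
    }
    EOF

In both negative cases the app tears down through spdk_rpc_server_finish and spdk_app_stop with a non-zero status, which is precisely the error path these tests exist to exercise.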
00:07:02.854 [2024-10-01 03:33:55.301392] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:02.854 [2024-10-01 03:33:55.301401] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:03.113 00:07:03.113 real 0m0.676s 00:07:03.113 user 0m0.467s 00:07:03.113 sys 0m0.105s 00:07:03.113 03:33:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.113 ************************************ 00:07:03.113 END TEST bdev_json_nonarray 00:07:03.113 ************************************ 00:07:03.113 03:33:55 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:07:03.113 03:33:55 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:07:03.113 00:07:03.113 real 0m40.435s 00:07:03.113 user 1m0.701s 00:07:03.113 sys 0m5.783s 00:07:03.113 03:33:55 blockdev_nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:03.113 ************************************ 00:07:03.113 END TEST blockdev_nvme 00:07:03.113 ************************************ 00:07:03.113 03:33:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:03.113 03:33:55 -- spdk/autotest.sh@209 -- # uname -s 00:07:03.113 03:33:55 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:07:03.113 03:33:55 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:03.113 03:33:55 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:03.113 03:33:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:03.113 03:33:55 -- common/autotest_common.sh@10 -- # set +x 00:07:03.113 ************************************ 00:07:03.113 START TEST blockdev_nvme_gpt 00:07:03.113 ************************************ 00:07:03.113 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:07:03.371 * Looking for test storage... 
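With the nvme battery finished, the trace above also shows blockdev.sh's teardown idiom before the gpt variant begins: a cleanup function that deletes the generated fixtures, armed as a signal/exit trap for the whole run and then disarmed once it has been called explicitly on the success path. Condensed, with $testdir standing in for the test/bdev directory:

    # sketch of the cleanup/trap idiom from blockdev.sh
    cleanup() {
        rm -f "$testdir/aiofile" "$testdir/bdev.json"
    }
    trap cleanup SIGINT SIGTERM EXIT   # safety net while the battery runs
    # ... tests ...
    trap - SIGINT SIGTERM EXIT         # disarm, then tidy up explicitly
    cleanup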
00:07:03.371 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:03.371 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:03.371 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lcov --version 00:07:03.371 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:03.371 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:03.371 03:33:55 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:07:03.371 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:03.371 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.371 --rc genhtml_branch_coverage=1 00:07:03.371 --rc genhtml_function_coverage=1 00:07:03.371 --rc genhtml_legend=1 00:07:03.371 --rc geninfo_all_blocks=1 00:07:03.371 --rc geninfo_unexecuted_blocks=1 00:07:03.371 00:07:03.371 ' 00:07:03.371 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.371 --rc 
genhtml_branch_coverage=1 00:07:03.371 --rc genhtml_function_coverage=1 00:07:03.371 --rc genhtml_legend=1 00:07:03.371 --rc geninfo_all_blocks=1 00:07:03.371 --rc geninfo_unexecuted_blocks=1 00:07:03.371 00:07:03.371 ' 00:07:03.371 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.371 --rc genhtml_branch_coverage=1 00:07:03.371 --rc genhtml_function_coverage=1 00:07:03.371 --rc genhtml_legend=1 00:07:03.371 --rc geninfo_all_blocks=1 00:07:03.371 --rc geninfo_unexecuted_blocks=1 00:07:03.371 00:07:03.371 ' 00:07:03.371 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:03.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:03.371 --rc genhtml_branch_coverage=1 00:07:03.371 --rc genhtml_function_coverage=1 00:07:03.371 --rc genhtml_legend=1 00:07:03.371 --rc geninfo_all_blocks=1 00:07:03.371 --rc geninfo_unexecuted_blocks=1 00:07:03.371 00:07:03.371 ' 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:03.371 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61232 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61232 
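The version gate traced above is scripts/common.sh's cmp_versions at work: lt 1.15 2 splits both version strings on the IFS set ".-:" and compares them field by field, numerically, so 1.15 sorts before 2 because 1 < 2 in the first field. A condensed stand-in (numeric fields only; the repo's helper takes the operator, here '<', as an argument):

    # sketch: field-wise numeric version comparison, as in cmp_versions
    version_lt() {
        local -a v1 v2
        IFS='.-:' read -ra v1 <<< "$1"
        IFS='.-:' read -ra v2 <<< "$2"
        local i
        for (( i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++ )); do
            (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
            (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
        done
        return 1   # equal is not less-than
    }
    version_lt 1.15 2 && echo "lcov predates 2.x"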
00:07:03.372 03:33:55 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:03.372 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@831 -- # '[' -z 61232 ']' 00:07:03.372 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:03.372 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.372 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:03.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:03.372 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.372 03:33:55 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:03.372 [2024-10-01 03:33:55.894982] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:03.372 [2024-10-01 03:33:55.895113] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61232 ] 00:07:03.631 [2024-10-01 03:33:56.045770] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.890 [2024-10-01 03:33:56.225476] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.455 03:33:56 blockdev_nvme_gpt -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.456 03:33:56 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # return 0 00:07:04.456 03:33:56 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:04.456 03:33:56 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:07:04.456 03:33:56 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:04.713 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:04.713 Waiting for block devices as requested 00:07:04.713 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:04.971 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:07:04.971 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:04.971 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:10.233 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:10.233 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:10.233 03:34:02 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:10.233 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:07:10.234 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:07:10.234 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:07:10.234 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:10.234 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:10.234 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:07:10.234 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:07:10.234 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:07:10.234 03:34:02 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:07:10.234 03:34:02 
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:07:10.234 BYT; 00:07:10.234 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:07:10.234 BYT; 00:07:10.234 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:10.234 03:34:02 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:07:10.234 03:34:02 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:07:11.167 The operation has completed successfully. 00:07:11.167 03:34:03 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:07:12.101 The operation has completed successfully. 00:07:12.101 03:34:04 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:12.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:13.233 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.233 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.233 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.233 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:13.233 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:07:13.233 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.233 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.233 [] 00:07:13.233 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.233 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:07:13.233 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:07:13.233 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:13.233 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:13.233 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:13.233 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.233 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.491 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.491 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:07:13.491 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:13.491 03:34:05 
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.491 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.491 03:34:05 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.491 03:34:05 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.491 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:13.491 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:13.491 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:13.491 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:13.491 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:13.753 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:13.753 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:13.753 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:13.754 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "25fb42a7-1d48-4730-821f-60575b381822"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "25fb42a7-1d48-4730-821f-60575b381822",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "49a571fd-203f-4f67-8862-f6f61e8cedd7"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "49a571fd-203f-4f67-8862-f6f61e8cedd7",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "9bf54303-1ee5-4a51-be69-f581e94991fd"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9bf54303-1ee5-4a51-be69-f581e94991fd",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "03e74c3f-ce9c-4ac5-b615-412e825c9424"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "03e74c3f-ce9c-4ac5-b615-412e825c9424",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "1b444533-5a71-44ba-8194-affd46c9baea"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "1b444533-5a71-44ba-8194-affd46c9baea",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:13.754 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:13.754 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:13.754 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:13.754 03:34:06 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61232 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@950 -- # '[' -z 61232 ']' 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # kill -0 61232 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # uname 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61232 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:13.754 killing process with pid 61232 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61232' 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@969 -- # kill 61232 00:07:13.754 03:34:06 blockdev_nvme_gpt -- common/autotest_common.sh@974 -- # wait 61232 00:07:15.128 03:34:07 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:15.128 03:34:07 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:15.128 03:34:07 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:07:15.128 03:34:07 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.128 03:34:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:15.128 ************************************ 00:07:15.128 START TEST bdev_hello_world 00:07:15.128 ************************************ 00:07:15.128 03:34:07 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:15.128 
[2024-10-01 03:34:07.471110] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:15.128 [2024-10-01 03:34:07.471239] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61849 ] 00:07:15.128 [2024-10-01 03:34:07.620049] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.386 [2024-10-01 03:34:07.775475] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.952 [2024-10-01 03:34:08.271118] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:15.952 [2024-10-01 03:34:08.271160] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:15.952 [2024-10-01 03:34:08.271176] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:15.952 [2024-10-01 03:34:08.273145] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:15.952 [2024-10-01 03:34:08.273538] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:15.952 [2024-10-01 03:34:08.273570] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:15.952 [2024-10-01 03:34:08.273750] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:15.952 00:07:15.952 [2024-10-01 03:34:08.273770] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:16.518 00:07:16.518 real 0m1.524s 00:07:16.518 user 0m1.222s 00:07:16.518 sys 0m0.196s 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:16.518 ************************************ 00:07:16.518 END TEST bdev_hello_world 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:16.518 ************************************ 00:07:16.518 03:34:08 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:16.518 03:34:08 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:16.518 03:34:08 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:16.518 03:34:08 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:16.518 ************************************ 00:07:16.518 START TEST bdev_bounds 00:07:16.518 ************************************ 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:07:16.518 Process bdevio pid: 61886 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61886 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61886' 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61886 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 61886 ']' 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.518 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:16.518 03:34:08 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:16.518 [2024-10-01 03:34:09.030578] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:16.518 [2024-10-01 03:34:09.030695] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61886 ] 00:07:16.776 [2024-10-01 03:34:09.176990] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:17.035 [2024-10-01 03:34:09.326536] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:17.035 [2024-10-01 03:34:09.326646] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:07:17.035 [2024-10-01 03:34:09.326937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.601 03:34:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.601 03:34:09 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:07:17.601 03:34:09 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:17.601 I/O targets: 00:07:17.601 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:17.601 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:07:17.601 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:07:17.601 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.601 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.601 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:17.601 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:17.601 00:07:17.601 00:07:17.601 CUnit - A unit testing framework for C - Version 2.1-3 00:07:17.601 http://cunit.sourceforge.net/ 00:07:17.601 00:07:17.601 00:07:17.601 Suite: bdevio tests on: Nvme3n1 00:07:17.601 Test: blockdev write read block ...passed 00:07:17.601 Test: blockdev write zeroes read block ...passed 00:07:17.601 Test: blockdev write zeroes read no split ...passed 00:07:17.601 Test: blockdev write zeroes read split ...passed 00:07:17.601 Test: blockdev write zeroes read split partial ...passed 00:07:17.601 Test: blockdev reset ...[2024-10-01 03:34:10.007454] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0] resetting controller 00:07:17.601 [2024-10-01 03:34:10.010495] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:17.601 passed 00:07:17.601 Test: blockdev write read 8 blocks ...passed 00:07:17.601 Test: blockdev write read size > 128k ...passed 00:07:17.601 Test: blockdev write read invalid size ...passed 00:07:17.601 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.601 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.601 Test: blockdev write read max offset ...passed 00:07:17.601 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.601 Test: blockdev writev readv 8 blocks ...passed 00:07:17.601 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.601 Test: blockdev writev readv block ...passed 00:07:17.601 Test: blockdev writev readv size > 128k ...passed 00:07:17.601 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.601 Test: blockdev comparev and writev ...[2024-10-01 03:34:10.016925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b4a06000 len:0x1000 00:07:17.601 [2024-10-01 03:34:10.016980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.601 passed 00:07:17.601 Test: blockdev nvme passthru rw ...passed 00:07:17.601 Test: blockdev nvme passthru vendor specific ...[2024-10-01 03:34:10.018204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.601 [2024-10-01 03:34:10.018337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.601 passed 00:07:17.601 Test: blockdev nvme admin passthru ...passed 00:07:17.601 Test: blockdev copy ...passed 00:07:17.601 Suite: bdevio tests on: Nvme2n3 00:07:17.601 Test: blockdev write read block ...passed 00:07:17.601 Test: blockdev write zeroes read block ...passed 00:07:17.601 Test: blockdev write zeroes read no split ...passed 00:07:17.601 Test: blockdev write zeroes read split ...passed 00:07:17.601 Test: blockdev write zeroes read split partial ...passed 00:07:17.601 Test: blockdev reset ...[2024-10-01 03:34:10.077170] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:17.601 [2024-10-01 03:34:10.080393] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:17.601 passed 00:07:17.601 Test: blockdev write read 8 blocks ...passed 00:07:17.601 Test: blockdev write read size > 128k ...passed 00:07:17.601 Test: blockdev write read invalid size ...passed 00:07:17.601 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.601 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.601 Test: blockdev write read max offset ...passed 00:07:17.601 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.601 Test: blockdev writev readv 8 blocks ...passed 00:07:17.601 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.601 Test: blockdev writev readv block ...passed 00:07:17.601 Test: blockdev writev readv size > 128k ...passed 00:07:17.601 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.601 Test: blockdev comparev and writev ...[2024-10-01 03:34:10.087297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c123c000 len:0x1000 00:07:17.601 [2024-10-01 03:34:10.087355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.601 passed 00:07:17.601 Test: blockdev nvme passthru rw ...passed 00:07:17.601 Test: blockdev nvme passthru vendor specific ...[2024-10-01 03:34:10.088189] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.601 passed 00:07:17.601 Test: blockdev nvme admin passthru ...[2024-10-01 03:34:10.088236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.601 passed 00:07:17.601 Test: blockdev copy ...passed 00:07:17.601 Suite: bdevio tests on: Nvme2n2 00:07:17.601 Test: blockdev write read block ...passed 00:07:17.601 Test: blockdev write zeroes read block ...passed 00:07:17.601 Test: blockdev write zeroes read no split ...passed 00:07:17.601 Test: blockdev write zeroes read split ...passed 00:07:17.601 Test: blockdev write zeroes read split partial ...passed 00:07:17.601 Test: blockdev reset ...[2024-10-01 03:34:10.146466] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:17.601 [2024-10-01 03:34:10.149520] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:17.859 passed 00:07:17.859 Test: blockdev write read 8 blocks ...passed 00:07:17.859 Test: blockdev write read size > 128k ...passed 00:07:17.859 Test: blockdev write read invalid size ...passed 00:07:17.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.859 Test: blockdev write read max offset ...passed 00:07:17.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.859 Test: blockdev writev readv 8 blocks ...passed 00:07:17.859 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.859 Test: blockdev writev readv block ...passed 00:07:17.859 Test: blockdev writev readv size > 128k ...passed 00:07:17.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.859 Test: blockdev comparev and writev ...[2024-10-01 03:34:10.158997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1236000 len:0x1000 00:07:17.859 [2024-10-01 03:34:10.159068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.859 passed 00:07:17.859 Test: blockdev nvme passthru rw ...passed 00:07:17.859 Test: blockdev nvme passthru vendor specific ...[2024-10-01 03:34:10.160650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.859 [2024-10-01 03:34:10.160773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.859 passed 00:07:17.859 Test: blockdev nvme admin passthru ...passed 00:07:17.859 Test: blockdev copy ...passed 00:07:17.859 Suite: bdevio tests on: Nvme2n1 00:07:17.859 Test: blockdev write read block ...passed 00:07:17.859 Test: blockdev write zeroes read block ...passed 00:07:17.859 Test: blockdev write zeroes read no split ...passed 00:07:17.859 Test: blockdev write zeroes read split ...passed 00:07:17.859 Test: blockdev write zeroes read split partial ...passed 00:07:17.859 Test: blockdev reset ...[2024-10-01 03:34:10.222618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0] resetting controller 00:07:17.859 [2024-10-01 03:34:10.225735] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:17.859 passed 00:07:17.859 Test: blockdev write read 8 blocks ...passed 00:07:17.859 Test: blockdev write read size > 128k ...passed 00:07:17.859 Test: blockdev write read invalid size ...passed 00:07:17.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.859 Test: blockdev write read max offset ...passed 00:07:17.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.859 Test: blockdev writev readv 8 blocks ...passed 00:07:17.859 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.859 Test: blockdev writev readv block ...passed 00:07:17.859 Test: blockdev writev readv size > 128k ...passed 00:07:17.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.859 Test: blockdev comparev and writev ...[2024-10-01 03:34:10.232379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c1232000 len:0x1000 00:07:17.859 [2024-10-01 03:34:10.232422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.859 passed 00:07:17.859 Test: blockdev nvme passthru rw ...passed 00:07:17.859 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.859 Test: blockdev nvme admin passthru ...[2024-10-01 03:34:10.233193] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:17.859 [2024-10-01 03:34:10.233224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:17.859 passed 00:07:17.859 Test: blockdev copy ...passed 00:07:17.859 Suite: bdevio tests on: Nvme1n1p2 00:07:17.859 Test: blockdev write read block ...passed 00:07:17.859 Test: blockdev write zeroes read block ...passed 00:07:17.859 Test: blockdev write zeroes read no split ...passed 00:07:17.859 Test: blockdev write zeroes read split ...passed 00:07:17.859 Test: blockdev write zeroes read split partial ...passed 00:07:17.859 Test: blockdev reset ...[2024-10-01 03:34:10.296338] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:17.859 [2024-10-01 03:34:10.299206] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:07:17.859 passed 00:07:17.859 Test: blockdev write read 8 blocks ...passed 00:07:17.859 Test: blockdev write read size > 128k ...passed 00:07:17.859 Test: blockdev write read invalid size ...passed 00:07:17.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.859 Test: blockdev write read max offset ...passed 00:07:17.859 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.859 Test: blockdev writev readv 8 blocks ...passed 00:07:17.859 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.859 Test: blockdev writev readv block ...passed 00:07:17.859 Test: blockdev writev readv size > 128k ...passed 00:07:17.859 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.859 Test: blockdev comparev and writev ...[2024-10-01 03:34:10.307720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2c122e000 len:0x1000 00:07:17.859 [2024-10-01 03:34:10.307768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.859 passed 00:07:17.859 Test: blockdev nvme passthru rw ...passed 00:07:17.859 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.859 Test: blockdev nvme admin passthru ...passed 00:07:17.859 Test: blockdev copy ...passed 00:07:17.859 Suite: bdevio tests on: Nvme1n1p1 00:07:17.859 Test: blockdev write read block ...passed 00:07:17.859 Test: blockdev write zeroes read block ...passed 00:07:17.859 Test: blockdev write zeroes read no split ...passed 00:07:17.859 Test: blockdev write zeroes read split ...passed 00:07:17.859 Test: blockdev write zeroes read split partial ...passed 00:07:17.859 Test: blockdev reset ...[2024-10-01 03:34:10.373938] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0] resetting controller 00:07:17.860 [2024-10-01 03:34:10.376707] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
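Note the lba field in the Nvme1n1p2 COMPARE above: 655360 rather than 0. Nvme1n1p1 and Nvme1n1p2 are GPT partitions of the same namespace, so bdevio's offset 0 is remapped to each partition's starting LBA. Assuming 512-byte logical blocks (an assumption about this namespace, not something the log states), that start works out to 320 MiB into the disk:

  # Worked arithmetic: partition start LBA -> byte offset, assuming 512 B
  # logical blocks.
  echo $(( 655360 * 512 / 1024 / 1024 ))   # 320 (MiB)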
00:07:17.860 passed 00:07:17.860 Test: blockdev write read 8 blocks ...passed 00:07:17.860 Test: blockdev write read size > 128k ...passed 00:07:17.860 Test: blockdev write read invalid size ...passed 00:07:17.860 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:17.860 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:17.860 Test: blockdev write read max offset ...passed 00:07:17.860 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:17.860 Test: blockdev writev readv 8 blocks ...passed 00:07:17.860 Test: blockdev writev readv 30 x 1block ...passed 00:07:17.860 Test: blockdev writev readv block ...passed 00:07:17.860 Test: blockdev writev readv size > 128k ...passed 00:07:17.860 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:17.860 Test: blockdev comparev and writev ...[2024-10-01 03:34:10.385418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c1e0e000 len:0x1000 00:07:17.860 [2024-10-01 03:34:10.385462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:17.860 passed 00:07:17.860 Test: blockdev nvme passthru rw ...passed 00:07:17.860 Test: blockdev nvme passthru vendor specific ...passed 00:07:17.860 Test: blockdev nvme admin passthru ...passed 00:07:17.860 Test: blockdev copy ...passed 00:07:17.860 Suite: bdevio tests on: Nvme0n1 00:07:17.860 Test: blockdev write read block ...passed 00:07:17.860 Test: blockdev write zeroes read block ...passed 00:07:17.860 Test: blockdev write zeroes read no split ...passed 00:07:18.118 Test: blockdev write zeroes read split ...passed 00:07:18.118 Test: blockdev write zeroes read split partial ...passed 00:07:18.118 Test: blockdev reset ...[2024-10-01 03:34:10.431611] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:07:18.118 [2024-10-01 03:34:10.434371] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:07:18.118 passed 00:07:18.118 Test: blockdev write read 8 blocks ...passed 00:07:18.118 Test: blockdev write read size > 128k ...passed 00:07:18.118 Test: blockdev write read invalid size ...passed 00:07:18.118 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:18.118 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:18.118 Test: blockdev write read max offset ...passed 00:07:18.118 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:18.118 Test: blockdev writev readv 8 blocks ...passed 00:07:18.118 Test: blockdev writev readv 30 x 1block ...passed 00:07:18.118 Test: blockdev writev readv block ...passed 00:07:18.118 Test: blockdev writev readv size > 128k ...passed 00:07:18.118 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:18.118 Test: blockdev comparev and writev ...[2024-10-01 03:34:10.439325] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:18.118 separate metadata which is not supported yet. 
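The skipped comparev_and_writev on Nvme0n1 is deliberate: that *ERROR* notice fires because this bdev is formatted with separate (non-interleaved) metadata, which bdevio's compare-and-write case does not support yet. One hedged way to confirm the metadata layout from the RPC side; bdev_get_bdevs is a real RPC, but treating md_size/md_interleave as the deciding fields is an assumption about its output format:

  # Sketch: inspect a bdev's metadata configuration (field names assumed).
  scripts/rpc.py bdev_get_bdevs -b Nvme0n1 | jq '.[0] | {name, md_size, md_interleave}'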
00:07:18.118 passed 00:07:18.118 Test: blockdev nvme passthru rw ...passed 00:07:18.118 Test: blockdev nvme passthru vendor specific ...passed 00:07:18.118 Test: blockdev nvme admin passthru ...[2024-10-01 03:34:10.439680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:18.118 [2024-10-01 03:34:10.439728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:18.118 passed 00:07:18.118 Test: blockdev copy ...passed 00:07:18.118 00:07:18.118 Run Summary: Type Total Ran Passed Failed Inactive 00:07:18.118 suites 7 7 n/a 0 0 00:07:18.118 tests 161 161 161 0 0 00:07:18.118 asserts 1025 1025 1025 0 n/a 00:07:18.118 00:07:18.118 Elapsed time = 1.247 seconds 00:07:18.118 0 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61886 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 61886 ']' 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 61886 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61886 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:18.118 killing process with pid 61886 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61886' 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@969 -- # kill 61886 00:07:18.118 03:34:10 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@974 -- # wait 61886 00:07:18.684 03:34:11 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:18.684 00:07:18.684 real 0m2.232s 00:07:18.684 user 0m5.511s 00:07:18.684 sys 0m0.275s 00:07:18.684 03:34:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:18.684 03:34:11 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:18.684 ************************************ 00:07:18.684 END TEST bdev_bounds 00:07:18.684 ************************************ 00:07:18.942 03:34:11 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:18.942 03:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:18.942 03:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:18.942 03:34:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:18.942 ************************************ 00:07:18.942 START TEST bdev_nbd 00:07:18.942 ************************************ 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:18.942 03:34:11 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61940 00:07:18.942 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:18.943 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61940 /var/tmp/spdk-nbd.sock 00:07:18.943 03:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 61940 ']' 00:07:18.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:18.943 03:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:18.943 03:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:18.943 03:34:11 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:18.943 03:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:18.943 03:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:18.943 03:34:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:18.943 [2024-10-01 03:34:11.312476] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
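From here on, nbd_function_test drives a bare bdev_svc app entirely over the /var/tmp/spdk-nbd.sock RPC socket. Condensed from the commands visible in this log (paths shortened to be relative to an SPDK checkout), the whole start/verify/stop cycle for a single device looks like:

  # Export a bdev as a kernel NBD device, verify it, then tear it down.
  test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 \
      --json test/bdev/bdev.json &
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
  scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0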
00:07:18.943 [2024-10-01 03:34:11.312594] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:18.943 [2024-10-01 03:34:11.462425] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.201 [2024-10-01 03:34:11.640325] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:19.770 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.031 1+0 records in 00:07:20.031 1+0 records out 00:07:20.031 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000806561 s, 5.1 MB/s 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.031 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.293 1+0 records in 00:07:20.293 1+0 records out 00:07:20.293 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000649845 s, 6.3 MB/s 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.293 03:34:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:07:20.554 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:20.554 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:20.554 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:20.554 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:07:20.554 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.555 1+0 records in 00:07:20.555 1+0 records out 00:07:20.555 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000541295 s, 7.6 MB/s 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.555 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:20.816 1+0 records in 00:07:20.816 1+0 records out 00:07:20.816 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000420458 s, 9.7 MB/s 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:20.816 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:21.077 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:21.077 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.078 1+0 records in 00:07:21.078 1+0 records out 00:07:21.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000546093 s, 7.5 MB/s 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:21.078 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:07:21.337 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:21.337 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:21.337 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:21.337 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:07:21.337 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:21.337 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.337 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.338 1+0 records in 00:07:21.338 1+0 records out 00:07:21.338 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000500369 s, 8.2 MB/s 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:21.338 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd6 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd6 /proc/partitions 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@885 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:21.599 1+0 records in 00:07:21.599 1+0 records out 00:07:21.599 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000694434 s, 5.9 MB/s 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:07:21.599 03:34:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:21.860 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd0", 00:07:21.860 "bdev_name": "Nvme0n1" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd1", 00:07:21.860 "bdev_name": "Nvme1n1p1" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd2", 00:07:21.860 "bdev_name": "Nvme1n1p2" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd3", 00:07:21.860 "bdev_name": "Nvme2n1" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd4", 00:07:21.860 "bdev_name": "Nvme2n2" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd5", 00:07:21.860 "bdev_name": "Nvme2n3" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd6", 00:07:21.860 "bdev_name": "Nvme3n1" 00:07:21.860 } 00:07:21.860 ]' 00:07:21.860 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:21.860 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:21.860 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd0", 00:07:21.860 "bdev_name": "Nvme0n1" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd1", 00:07:21.860 "bdev_name": "Nvme1n1p1" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd2", 00:07:21.860 "bdev_name": "Nvme1n1p2" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd3", 00:07:21.860 "bdev_name": "Nvme2n1" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd4", 00:07:21.860 "bdev_name": "Nvme2n2" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd5", 00:07:21.860 "bdev_name": "Nvme2n3" 00:07:21.860 }, 00:07:21.860 { 00:07:21.860 "nbd_device": "/dev/nbd6", 00:07:21.860 "bdev_name": "Nvme3n1" 00:07:21.860 } 00:07:21.860 ]' 00:07:21.861 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:07:21.861 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.861 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:07:21.861 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:21.861 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:21.861 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:21.861 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.121 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:22.382 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:22.382 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:22.382 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:22.382 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.382 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.382 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:22.382 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.382 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.382 03:34:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.382 03:34:14 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.641 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.899 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:23.158 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:23.158 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:23.158 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:23.158 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.158 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.158 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:23.158 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.158 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.158 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.159 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:23.418 
03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.418 03:34:15 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:23.677 /dev/nbd0 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.677 1+0 records in 00:07:23.677 1+0 records out 00:07:23.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000588437 s, 7.0 MB/s 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.677 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:07:23.938 /dev/nbd1 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:23.938 03:34:16 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:23.938 1+0 records in 00:07:23.938 1+0 records out 00:07:23.938 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000698757 s, 5.9 MB/s 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:23.938 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:23.939 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:07:24.201 /dev/nbd10 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.201 1+0 records in 00:07:24.201 1+0 records out 00:07:24.201 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110364 s, 3.7 MB/s 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.201 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:07:24.463 /dev/nbd11 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.463 1+0 records in 00:07:24.463 1+0 records out 00:07:24.463 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000938149 s, 4.4 MB/s 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.463 03:34:16 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:07:24.725 /dev/nbd12 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 
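Every device check above repeats the same waitfornbd + dd pattern: poll /proc/partitions up to 20 times for the node to appear, copy a single 4 KiB block out with O_DIRECT, and require the copy to be exactly 4096 bytes. Condensed into one helper in the same shell style; the function name and the retry sleep are illustrative, the commands are the ones the test itself runs:

  check_nbd() {
      local name=$1 i
      for ((i = 1; i <= 20; i++)); do
          # Same readiness probe as waitfornbd in the trace above.
          grep -q -w "$name" /proc/partitions && break
          sleep 0.1      # illustrative back-off; not shown in the trace
      done
      # Single 4 KiB O_DIRECT read, then a size check, as in the log.
      dd if=/dev/"$name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
      [[ $(stat -c %s /tmp/nbdtest) -eq 4096 ]]
  }
  check_nbd nbd12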
00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.725 1+0 records in 00:07:24.725 1+0 records out 00:07:24.725 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00105432 s, 3.9 MB/s 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.725 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:07:24.986 /dev/nbd13 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:24.986 1+0 records in 00:07:24.986 1+0 records out 00:07:24.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106717 s, 3.8 MB/s 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:24.986 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:07:25.247 /dev/nbd14 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd14 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd14 /proc/partitions 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.247 1+0 records in 00:07:25.247 1+0 records out 00:07:25.247 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109774 s, 3.7 MB/s 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.247 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:25.507 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd0", 00:07:25.507 "bdev_name": "Nvme0n1" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd1", 00:07:25.507 "bdev_name": "Nvme1n1p1" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd10", 00:07:25.507 "bdev_name": "Nvme1n1p2" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd11", 00:07:25.507 "bdev_name": "Nvme2n1" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd12", 00:07:25.507 "bdev_name": "Nvme2n2" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd13", 00:07:25.507 "bdev_name": "Nvme2n3" 
00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd14", 00:07:25.507 "bdev_name": "Nvme3n1" 00:07:25.507 } 00:07:25.507 ]' 00:07:25.507 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd0", 00:07:25.507 "bdev_name": "Nvme0n1" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd1", 00:07:25.507 "bdev_name": "Nvme1n1p1" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd10", 00:07:25.507 "bdev_name": "Nvme1n1p2" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd11", 00:07:25.507 "bdev_name": "Nvme2n1" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd12", 00:07:25.507 "bdev_name": "Nvme2n2" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd13", 00:07:25.507 "bdev_name": "Nvme2n3" 00:07:25.507 }, 00:07:25.507 { 00:07:25.507 "nbd_device": "/dev/nbd14", 00:07:25.507 "bdev_name": "Nvme3n1" 00:07:25.507 } 00:07:25.507 ]' 00:07:25.507 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:25.507 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:25.507 /dev/nbd1 00:07:25.507 /dev/nbd10 00:07:25.507 /dev/nbd11 00:07:25.507 /dev/nbd12 00:07:25.507 /dev/nbd13 00:07:25.507 /dev/nbd14' 00:07:25.507 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:25.507 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:25.507 /dev/nbd1 00:07:25.507 /dev/nbd10 00:07:25.507 /dev/nbd11 00:07:25.507 /dev/nbd12 00:07:25.507 /dev/nbd13 00:07:25.507 /dev/nbd14' 00:07:25.507 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:07:25.507 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:07:25.507 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:25.508 256+0 records in 00:07:25.508 256+0 records out 00:07:25.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.005704 s, 184 MB/s 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.508 03:34:17 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:25.508 256+0 records in 00:07:25.508 256+0 records out 00:07:25.508 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.143878 s, 7.3 MB/s 00:07:25.508 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.508 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:25.785 256+0 records in 00:07:25.785 256+0 records out 00:07:25.785 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.188633 s, 5.6 MB/s 00:07:25.785 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:25.785 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:26.072 256+0 records in 00:07:26.072 256+0 records out 00:07:26.072 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.230216 s, 4.6 MB/s 00:07:26.072 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.072 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:26.332 256+0 records in 00:07:26.332 256+0 records out 00:07:26.332 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233238 s, 4.5 MB/s 00:07:26.332 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.332 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:26.599 256+0 records in 00:07:26.599 256+0 records out 00:07:26.599 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.229528 s, 4.6 MB/s 00:07:26.599 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.599 03:34:18 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:26.861 256+0 records in 00:07:26.861 256+0 records out 00:07:26.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.239079 s, 4.4 MB/s 00:07:26.861 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:26.861 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:26.861 256+0 records in 00:07:26.861 256+0 records out 00:07:26.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.210206 s, 5.0 MB/s 00:07:26.861 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:07:26.861 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:26.861 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:26.861 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:26.861 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:26.861 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.122 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.382 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.641 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:27.641 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.641 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.641 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.641 03:34:19 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:27.641 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:27.642 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:27.642 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:27.642 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.642 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.642 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:27.642 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.642 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.642 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.642 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:27.901 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.163 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.423 03:34:20 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.683 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:28.944 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:29.205 malloc_lvol_verify 00:07:29.205 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:29.466 4fab75d9-4186-46cf-8408-c0609bed0b3f 00:07:29.466 03:34:21 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:29.726 d94b4609-46e1-4682-a29c-2c5d16679c23 00:07:29.726 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:29.726 /dev/nbd0 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:29.988 mke2fs 1.47.0 (5-Feb-2023) 00:07:29.988 Discarding device blocks: 0/4096 done 00:07:29.988 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:29.988 00:07:29.988 Allocating group tables: 0/1 done 00:07:29.988 Writing inode tables: 0/1 done 00:07:29.988 Creating journal (1024 blocks): done 00:07:29.988 Writing superblocks and filesystem accounting information: 0/1 done 00:07:29.988 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:29.988 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:30.249 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:30.249 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:30.249 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.249 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.249 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:30.249 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:30.249 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61940 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 61940 ']' 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 61940 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61940 00:07:30.250 killing process with pid 61940 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61940' 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@969 -- # kill 61940 00:07:30.250 03:34:22 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@974 -- # wait 61940 00:07:31.193 ************************************ 00:07:31.193 END TEST bdev_nbd 00:07:31.193 ************************************ 00:07:31.193 03:34:23 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:31.193 00:07:31.193 real 0m12.390s 00:07:31.193 user 0m16.832s 00:07:31.193 sys 0m3.980s 00:07:31.193 03:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:31.193 03:34:23 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:31.193 03:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:31.193 03:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:07:31.193 03:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:07:31.193 skipping fio tests on NVMe due to multi-ns failures. 00:07:31.193 03:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:07:31.193 03:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:31.193 03:34:23 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:31.193 03:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:07:31.193 03:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:31.193 03:34:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:31.193 ************************************ 00:07:31.193 START TEST bdev_verify 00:07:31.193 ************************************ 00:07:31.193 03:34:23 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:31.453 [2024-10-01 03:34:23.781156] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:31.453 [2024-10-01 03:34:23.781331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62366 ] 00:07:31.453 [2024-10-01 03:34:23.940307] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:31.715 [2024-10-01 03:34:24.185791] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.715 [2024-10-01 03:34:24.185909] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.289 Running I/O for 5 seconds... 
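For readers skimming past the xtrace, the bdevperf invocation above unpacks as follows (flag meanings paraphrased from bdevperf's help output; double-check -C against your build):

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w verify -t 5 -C -m 0x3
  # -q 128     keep 128 I/Os outstanding per job
  # -o 4096    4 KiB per I/O
  # -w verify  write a pattern, read it back, compare
  # -t 5       run each job for 5 seconds
  # -C         let every core submit I/O to every bdev
  # -m 0x3     core mask: the two reactors on cores 0 and 1 in the notices above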
00:07:37.503 19584.00 IOPS, 76.50 MiB/s 19328.00 IOPS, 75.50 MiB/s 19882.67 IOPS, 77.67 MiB/s 19664.00 IOPS, 76.81 MiB/s 20108.80 IOPS, 78.55 MiB/s
00:07:37.503 Latency(us)
00:07:37.504 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:37.504 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x0 length 0xbd0bd
00:07:37.504 Nvme0n1 : 5.04 1421.65 5.55 0.00 0.00 89618.61 14720.39 92758.65
00:07:37.504 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:37.504 Nvme0n1 : 5.08 1399.28 5.47 0.00 0.00 90959.27 11846.89 87112.47
00:07:37.504 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x0 length 0x4ff80
00:07:37.504 Nvme1n1p1 : 5.08 1424.12 5.56 0.00 0.00 89187.14 8872.57 81466.29
00:07:37.504 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x4ff80 length 0x4ff80
00:07:37.504 Nvme1n1p1 : 5.09 1407.15 5.50 0.00 0.00 90633.37 12098.95 85095.98
00:07:37.504 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x0 length 0x4ff7f
00:07:37.504 Nvme1n1p2 : 5.09 1432.46 5.60 0.00 0.00 88755.05 12502.25 77030.01
00:07:37.504 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:07:37.504 Nvme1n1p2 : 5.10 1405.87 5.49 0.00 0.00 90518.21 14720.39 83079.48
00:07:37.504 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x0 length 0x80000
00:07:37.504 Nvme2n1 : 5.09 1432.10 5.59 0.00 0.00 88561.37 12451.84 72997.02
00:07:37.504 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x80000 length 0x80000
00:07:37.504 Nvme2n1 : 5.10 1405.06 5.49 0.00 0.00 90415.42 16736.89 78239.90
00:07:37.504 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x0 length 0x80000
00:07:37.504 Nvme2n2 : 5.10 1431.41 5.59 0.00 0.00 88426.24 13409.67 73803.62
00:07:37.504 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x80000 length 0x80000
00:07:37.504 Nvme2n2 : 5.10 1404.53 5.49 0.00 0.00 90316.60 17039.36 76223.41
00:07:37.504 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x0 length 0x80000
00:07:37.504 Nvme2n3 : 5.10 1431.07 5.59 0.00 0.00 88315.81 12653.49 75013.51
00:07:37.504 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x80000 length 0x80000
00:07:37.504 Nvme2n3 : 5.10 1404.14 5.48 0.00 0.00 90205.92 17140.18 79853.10
00:07:37.504 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x0 length 0x20000
00:07:37.504 Nvme3n1 : 5.10 1430.39 5.59 0.00 0.00 88217.10 13812.97 77433.30
00:07:37.504 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:37.504 Verification LBA range: start 0x20000 length 0x20000
00:07:37.504 Nvme3n1 : 5.11 1403.76 5.48 0.00 0.00 90084.28 16736.89 87112.47
00:07:37.504 ===================================================================================================================
00:07:37.504 Total : 19833.00 77.47 0.00 0.00 89578.45 8872.57 92758.65
00:07:38.943
00:07:38.943 real 0m7.390s
00:07:38.943 user 0m13.415s
00:07:38.943 sys 0m0.289s
00:07:38.943 03:34:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:38.943 ************************************
00:07:38.943 03:34:31 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:07:38.943 END TEST bdev_verify ************************************
00:07:38.943 03:34:31 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:38.943 03:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']'
00:07:38.943 03:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:38.943 03:34:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:38.943 ************************************
00:07:38.943 START TEST bdev_verify_big_io
00:07:38.943 ************************************
00:07:38.943 03:34:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:07:38.943 [2024-10-01 03:34:31.231482] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:07:38.943 [2024-10-01 03:34:31.231605] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62464 ]
00:07:38.943 [2024-10-01 03:34:31.381792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2
00:07:39.202 [2024-10-01 03:34:31.581941] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:39.202 [2024-10-01 03:34:31.581964] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1
00:07:39.774 Running I/O for 5 seconds...
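In the bdev_verify table above, the MiB/s column is derived rather than independently measured: it is IOPS multiplied by the 4 KiB I/O size. A quick check against the Total row:

  awk 'BEGIN { printf "%.2f MiB/s\n", 19833.00 * 4096 / 1048576 }'   # prints 77.47, matching Total

The same identity holds for every per-device row.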
00:07:46.454 1830.00 IOPS, 114.38 MiB/s 3102.50 IOPS, 193.91 MiB/s
00:07:46.454 Latency(us)
00:07:46.454 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:46.454 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:46.454 Verification LBA range: start 0x0 length 0xbd0b
00:07:46.454 Nvme0n1 : 5.91 73.07 4.57 0.00 0.00 1682886.18 21173.17 1780966.01
00:07:46.454 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:46.454 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:46.454 Nvme0n1 : 5.77 124.52 7.78 0.00 0.00 980761.49 21374.82 974369.08
00:07:46.454 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:46.454 Verification LBA range: start 0x0 length 0x4ff8
00:07:46.454 Nvme1n1p1 : 5.91 75.75 4.73 0.00 0.00 1543119.39 31860.58 1690627.15
00:07:46.454 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:46.454 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:46.454 Nvme1n1p1 : 5.87 117.32 7.33 0.00 0.00 1020662.95 88322.36 1755154.90
00:07:46.454 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:46.454 Verification LBA range: start 0x0 length 0x4ff7
00:07:46.454 Nvme1n1p2 : 5.97 81.74 5.11 0.00 0.00 1361037.27 21878.94 1432516.14
00:07:46.454 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:46.454 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:46.454 Nvme1n1p2 : 5.87 117.45 7.34 0.00 0.00 996224.35 88322.36 1780966.01
00:07:46.454 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:46.454 Verification LBA range: start 0x0 length 0x8000
00:07:46.454 Nvme2n1 : 5.97 85.74 5.36 0.00 0.00 1244406.55 31860.58 1471232.79
00:07:46.455 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:46.455 Verification LBA range: start 0x8000 length 0x8000
00:07:46.455 Nvme2n1 : 5.88 131.22 8.20 0.00 0.00 868693.38 104857.60 890483.00
00:07:46.455 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:46.455 Verification LBA range: start 0x0 length 0x8000
00:07:46.455 Nvme2n2 : 6.13 115.05 7.19 0.00 0.00 895207.32 12401.43 1703532.70
00:07:46.455 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:46.455 Verification LBA range: start 0x8000 length 0x8000
00:07:46.455 Nvme2n2 : 5.88 134.75 8.42 0.00 0.00 834267.17 94775.14 909841.33
00:07:46.455 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:46.455 Verification LBA range: start 0x0 length 0x8000
00:07:46.455 Nvme2n3 : 6.34 181.78 11.36 0.00 0.00 546172.17 11796.48 1542213.32
00:07:46.455 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:46.455 Verification LBA range: start 0x8000 length 0x8000
00:07:46.455 Nvme2n3 : 5.89 147.61 9.23 0.00 0.00 754276.25 4436.28 929199.66
00:07:46.455 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:46.455 Verification LBA range: start 0x0 length 0x2000
00:07:46.455 Nvme3n1 : 6.55 309.48 19.34 0.00 0.00 308816.18 152.02 1568024.42
00:07:46.455 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:46.455 Verification LBA range: start 0x2000 length 0x2000
00:07:46.455 Nvme3n1 : 5.90 147.58 9.22 0.00 0.00 734161.74 4713.55 955010.76
00:07:46.455 ===================================================================================================================
00:07:46.455 Total : 1843.06 115.19 0.00 0.00 835346.56 152.02 1780966.01
00:07:48.358
00:07:48.358 real 0m9.445s
00:07:48.358 user 0m17.707s
00:07:48.358 sys 0m0.298s
00:07:48.358 03:34:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:48.358 03:34:40 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:48.358 ************************************
00:07:48.358 END TEST bdev_verify_big_io
00:07:48.358 ************************************
00:07:48.358 03:34:40 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:48.358 03:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:48.358 03:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:48.358 03:34:40 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:48.358 ************************************
00:07:48.358 START TEST bdev_write_zeroes
00:07:48.358 ************************************
00:07:48.358 03:34:40 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:48.358 [2024-10-01 03:34:40.719107] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:07:48.358 [2024-10-01 03:34:40.719225] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62588 ]
00:07:48.358 [2024-10-01 03:34:40.868422] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:48.617 [2024-10-01 03:34:41.024892] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:07:49.181 Running I/O for 1 seconds...
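bdev_write_zeroes swaps the workload: rather than writing and re-reading a data pattern, each bdev is asked to zero ranges (natively, or emulated by the bdev layer), so there is no payload to verify and a single core with a one-second run is enough (-c 0x1 and -t 1 above). Across the three bdevperf passes only the I/O size, workload, and duration change, and the IOPS-to-throughput identity still holds for the table that follows:

  # bdev_verify:        -o 4096  -w verify       -t 5 -C -m 0x3
  # bdev_verify_big_io: -o 65536 -w verify       -t 5 -C -m 0x3
  # bdev_write_zeroes:  -o 4096  -w write_zeroes -t 1
  awk 'BEGIN { printf "%.2f MiB/s\n", 68216.89 * 4096 / 1048576 }'   # prints 266.47, the Total below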
00:07:50.115 68544.00 IOPS, 267.75 MiB/s
00:07:50.115 Latency(us)
00:07:50.115 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:50.115 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.115 Nvme0n1 : 1.02 9772.76 38.17 0.00 0.00 13072.26 5948.65 24097.08
00:07:50.115 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.115 Nvme1n1p1 : 1.02 9762.97 38.14 0.00 0.00 13071.75 8872.57 24097.08
00:07:50.115 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.115 Nvme1n1p2 : 1.02 9753.70 38.10 0.00 0.00 13048.68 8973.39 24601.21
00:07:50.115 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.115 Nvme2n1 : 1.02 9744.83 38.07 0.00 0.00 13035.09 9023.80 24702.03
00:07:50.115 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.115 Nvme2n2 : 1.03 9736.29 38.03 0.00 0.00 13000.50 9074.22 24197.91
00:07:50.115 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.115 Nvme2n3 : 1.03 9727.38 38.00 0.00 0.00 12989.69 9175.04 23794.61
00:07:50.115 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:50.115 Nvme3n1 : 1.03 9718.96 37.96 0.00 0.00 12978.93 8368.44 23794.61
00:07:50.115 ===================================================================================================================
00:07:50.115 Total : 68216.89 266.47 0.00 0.00 13028.13 5948.65 24702.03
00:07:50.704
00:07:50.704 real 0m2.585s
00:07:50.704 user 0m2.283s
00:07:50.704 sys 0m0.189s
00:07:50.704 03:34:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:50.704 ************************************
00:07:50.704 END TEST bdev_write_zeroes
00:07:50.704 ************************************
00:07:50.704 03:34:43 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:50.962 03:34:43 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:50.962 03:34:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']'
00:07:50.962 03:34:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:50.962 03:34:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:50.962 ************************************
00:07:50.962 START TEST bdev_json_nonenclosed
00:07:50.962 ************************************
00:07:50.962 03:34:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:50.962 [2024-10-01 03:34:43.363924] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:07:50.962 [2024-10-01 03:34:43.364071] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62641 ] 00:07:51.219 [2024-10-01 03:34:43.520730] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.219 [2024-10-01 03:34:43.660575] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.219 [2024-10-01 03:34:43.660645] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:51.219 [2024-10-01 03:34:43.660658] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:51.219 [2024-10-01 03:34:43.660666] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.476 00:07:51.476 real 0m0.585s 00:07:51.476 user 0m0.369s 00:07:51.476 sys 0m0.111s 00:07:51.476 03:34:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:51.476 03:34:43 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:51.476 ************************************ 00:07:51.476 END TEST bdev_json_nonenclosed 00:07:51.476 ************************************ 00:07:51.476 03:34:43 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:51.476 03:34:43 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:07:51.476 03:34:43 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:51.476 03:34:43 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:51.476 ************************************ 00:07:51.476 START TEST bdev_json_nonarray 00:07:51.476 ************************************ 00:07:51.476 03:34:43 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:51.476 [2024-10-01 03:34:43.993744] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:07:51.476 [2024-10-01 03:34:43.993873] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62661 ] 00:07:51.733 [2024-10-01 03:34:44.143872] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.990 [2024-10-01 03:34:44.319935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.990 [2024-10-01 03:34:44.320037] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
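Both JSON tests are deliberate failure cases: bdevperf is pointed at a malformed configuration and must exit cleanly through spdk_app_stop (the spdk_app_stop'd on non-zero warnings, the nonarray one wrapping up just below) instead of crashing. Illustrative fixtures reconstructed from the two error messages alone; the real files under test/bdev/ in the SPDK tree may differ:

  # "not enclosed in {}": the top level of the config is not a JSON object
  printf '%s\n' '"subsystems": []' > nonenclosed.json
  # "'subsystems' should be an array": an object where an array is required
  printf '%s\n' '{ "subsystems": { "bdev": { } } }' > nonarray.json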
00:07:51.990 [2024-10-01 03:34:44.320055] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:51.990 [2024-10-01 03:34:44.320064] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:52.247 00:07:52.247 real 0m0.680s 00:07:52.247 user 0m0.465s 00:07:52.247 sys 0m0.110s 00:07:52.247 03:34:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:52.247 03:34:44 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:52.247 ************************************ 00:07:52.247 END TEST bdev_json_nonarray 00:07:52.247 ************************************ 00:07:52.247 03:34:44 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:07:52.247 03:34:44 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:07:52.247 03:34:44 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:52.247 03:34:44 blockdev_nvme_gpt -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:52.247 03:34:44 blockdev_nvme_gpt -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:52.247 03:34:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:52.247 ************************************ 00:07:52.247 START TEST bdev_gpt_uuid 00:07:52.248 ************************************ 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1125 -- # bdev_gpt_uuid 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62692 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 62692 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@831 -- # '[' -z 62692 ']' 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.248 03:34:44 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:52.248 [2024-10-01 03:34:44.741569] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
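bdev_gpt_uuid starts a bare spdk_tgt and verifies that a GPT partition bdev can be looked up by its unique partition GUID, and that the GUID is reported consistently both as the bdev alias and under driver_specific.gpt. The assertions in the trace that follows reduce to (RPC commands and jq filters exactly as logged; the GUID comes from this run's GPT fixture):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
  $rpc bdev_wait_for_examine
  guid=6f89f330-603b-4116-ac73-2ca8eae53030              # Nvme1n1p1, SPDK_TEST_first
  bdev=$($rpc bdev_get_bdevs -b "$guid")                 # lookup by partition GUID
  [ "$(jq -r length <<<"$bdev")" = 1 ]                   # exactly one bdev matches
  [ "$(jq -r '.[0].aliases[0]' <<<"$bdev")" = "$guid" ]
  [ "$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev")" = "$guid" ]

The same three checks then repeat for the second partition, abf1734f-66e5-4c0f-aa29-4021d4d307df (SPDK_TEST_second), before the target process is killed.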
00:07:52.248 [2024-10-01 03:34:44.741690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62692 ] 00:07:52.505 [2024-10-01 03:34:44.890342] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.762 [2024-10-01 03:34:45.066584] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.328 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.328 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # return 0 00:07:53.328 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:53.328 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.328 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:53.586 Some configs were skipped because the RPC state that can call them passed over. 00:07:53.586 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.587 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:07:53.587 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.587 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:53.587 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.587 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:53.587 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.587 03:34:45 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:07:53.587 { 00:07:53.587 "name": "Nvme1n1p1", 00:07:53.587 "aliases": [ 00:07:53.587 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:53.587 ], 00:07:53.587 "product_name": "GPT Disk", 00:07:53.587 "block_size": 4096, 00:07:53.587 "num_blocks": 655104, 00:07:53.587 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:53.587 "assigned_rate_limits": { 00:07:53.587 "rw_ios_per_sec": 0, 00:07:53.587 "rw_mbytes_per_sec": 0, 00:07:53.587 "r_mbytes_per_sec": 0, 00:07:53.587 "w_mbytes_per_sec": 0 00:07:53.587 }, 00:07:53.587 "claimed": false, 00:07:53.587 "zoned": false, 00:07:53.587 "supported_io_types": { 00:07:53.587 "read": true, 00:07:53.587 "write": true, 00:07:53.587 "unmap": true, 00:07:53.587 "flush": true, 00:07:53.587 "reset": true, 00:07:53.587 "nvme_admin": false, 00:07:53.587 "nvme_io": false, 00:07:53.587 "nvme_io_md": false, 00:07:53.587 "write_zeroes": true, 00:07:53.587 "zcopy": false, 00:07:53.587 "get_zone_info": false, 00:07:53.587 "zone_management": false, 00:07:53.587 "zone_append": false, 00:07:53.587 "compare": true, 00:07:53.587 "compare_and_write": false, 00:07:53.587 "abort": true, 00:07:53.587 "seek_hole": false, 00:07:53.587 "seek_data": false, 00:07:53.587 "copy": true, 00:07:53.587 "nvme_iov_md": false 00:07:53.587 }, 00:07:53.587 "driver_specific": { 
00:07:53.587 "gpt": { 00:07:53.587 "base_bdev": "Nvme1n1", 00:07:53.587 "offset_blocks": 256, 00:07:53.587 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:53.587 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:53.587 "partition_name": "SPDK_TEST_first" 00:07:53.587 } 00:07:53.587 } 00:07:53.587 } 00:07:53.587 ]' 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:07:53.587 { 00:07:53.587 "name": "Nvme1n1p2", 00:07:53.587 "aliases": [ 00:07:53.587 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:53.587 ], 00:07:53.587 "product_name": "GPT Disk", 00:07:53.587 "block_size": 4096, 00:07:53.587 "num_blocks": 655103, 00:07:53.587 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:53.587 "assigned_rate_limits": { 00:07:53.587 "rw_ios_per_sec": 0, 00:07:53.587 "rw_mbytes_per_sec": 0, 00:07:53.587 "r_mbytes_per_sec": 0, 00:07:53.587 "w_mbytes_per_sec": 0 00:07:53.587 }, 00:07:53.587 "claimed": false, 00:07:53.587 "zoned": false, 00:07:53.587 "supported_io_types": { 00:07:53.587 "read": true, 00:07:53.587 "write": true, 00:07:53.587 "unmap": true, 00:07:53.587 "flush": true, 00:07:53.587 "reset": true, 00:07:53.587 "nvme_admin": false, 00:07:53.587 "nvme_io": false, 00:07:53.587 "nvme_io_md": false, 00:07:53.587 "write_zeroes": true, 00:07:53.587 "zcopy": false, 00:07:53.587 "get_zone_info": false, 00:07:53.587 "zone_management": false, 00:07:53.587 "zone_append": false, 00:07:53.587 "compare": true, 00:07:53.587 "compare_and_write": false, 00:07:53.587 "abort": true, 00:07:53.587 "seek_hole": false, 00:07:53.587 "seek_data": false, 00:07:53.587 "copy": true, 00:07:53.587 "nvme_iov_md": false 00:07:53.587 }, 00:07:53.587 "driver_specific": { 00:07:53.587 "gpt": { 00:07:53.587 "base_bdev": "Nvme1n1", 00:07:53.587 "offset_blocks": 655360, 00:07:53.587 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:53.587 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:53.587 "partition_name": "SPDK_TEST_second" 00:07:53.587 } 00:07:53.587 } 00:07:53.587 } 00:07:53.587 ]' 00:07:53.587 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 62692 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@950 -- # '[' -z 62692 ']' 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # kill -0 62692 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # uname 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62692 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.845 killing process with pid 62692 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62692' 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@969 -- # kill 62692 00:07:53.845 03:34:46 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@974 -- # wait 62692 00:07:55.741 00:07:55.741 real 0m3.170s 00:07:55.741 user 0m3.316s 00:07:55.741 sys 0m0.358s 00:07:55.741 03:34:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:55.741 03:34:47 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:55.741 ************************************ 00:07:55.741 END TEST bdev_gpt_uuid 00:07:55.741 ************************************ 00:07:55.741 03:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:07:55.741 03:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:07:55.741 03:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:07:55.741 03:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:55.741 03:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:55.741 03:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:55.741 03:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:55.741 03:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:55.741 03:34:47 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:55.741 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:56.001 Waiting for block devices as requested 00:07:56.001 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:56.001 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:56.001 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:56.001 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:01.266 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:01.266 03:34:53 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:08:01.266 03:34:53 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:08:01.525 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:08:01.525 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:08:01.525 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:08:01.525 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:08:01.525 03:34:53 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:08:01.525 00:08:01.525 real 0m58.200s 00:08:01.525 user 1m13.875s 00:08:01.525 sys 0m8.262s 00:08:01.525 03:34:53 blockdev_nvme_gpt -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:01.525 03:34:53 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:01.525 ************************************ 00:08:01.525 END TEST blockdev_nvme_gpt 00:08:01.525 ************************************ 00:08:01.525 03:34:53 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:01.525 03:34:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:01.525 03:34:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:01.525 03:34:53 -- common/autotest_common.sh@10 -- # set +x 00:08:01.525 ************************************ 00:08:01.525 START TEST nvme 00:08:01.525 ************************************ 00:08:01.525 03:34:53 nvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:08:01.525 * Looking for test storage... 00:08:01.525 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:01.525 03:34:53 nvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:08:01.525 03:34:53 nvme -- common/autotest_common.sh@1681 -- # lcov --version 00:08:01.525 03:34:53 nvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:08:01.525 03:34:54 nvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:08:01.525 03:34:54 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:01.525 03:34:54 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:01.525 03:34:54 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:01.525 03:34:54 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:08:01.526 03:34:54 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:08:01.526 03:34:54 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:08:01.526 03:34:54 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:08:01.526 03:34:54 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:08:01.526 03:34:54 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:08:01.526 03:34:54 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:08:01.526 03:34:54 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:01.526 03:34:54 nvme -- scripts/common.sh@344 -- # case "$op" in 00:08:01.526 03:34:54 nvme -- scripts/common.sh@345 -- # : 1 00:08:01.526 03:34:54 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:01.526 03:34:54 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:01.526 03:34:54 nvme -- scripts/common.sh@365 -- # decimal 1 00:08:01.526 03:34:54 nvme -- scripts/common.sh@353 -- # local d=1 00:08:01.526 03:34:54 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:01.526 03:34:54 nvme -- scripts/common.sh@355 -- # echo 1 00:08:01.526 03:34:54 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:08:01.526 03:34:54 nvme -- scripts/common.sh@366 -- # decimal 2 00:08:01.526 03:34:54 nvme -- scripts/common.sh@353 -- # local d=2 00:08:01.526 03:34:54 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:01.526 03:34:54 nvme -- scripts/common.sh@355 -- # echo 2 00:08:01.526 03:34:54 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:08:01.526 03:34:54 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:01.526 03:34:54 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:01.526 03:34:54 nvme -- scripts/common.sh@368 -- # return 0 00:08:01.526 03:34:54 nvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:01.526 03:34:54 nvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:08:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.526 --rc genhtml_branch_coverage=1 00:08:01.526 --rc genhtml_function_coverage=1 00:08:01.526 --rc genhtml_legend=1 00:08:01.526 --rc geninfo_all_blocks=1 00:08:01.526 --rc geninfo_unexecuted_blocks=1 00:08:01.526 00:08:01.526 ' 00:08:01.526 03:34:54 nvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:08:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.526 --rc genhtml_branch_coverage=1 00:08:01.526 --rc genhtml_function_coverage=1 00:08:01.526 --rc genhtml_legend=1 00:08:01.526 --rc geninfo_all_blocks=1 00:08:01.526 --rc geninfo_unexecuted_blocks=1 00:08:01.526 00:08:01.526 ' 00:08:01.526 03:34:54 nvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:08:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.526 --rc genhtml_branch_coverage=1 00:08:01.526 --rc genhtml_function_coverage=1 00:08:01.526 --rc genhtml_legend=1 00:08:01.526 --rc geninfo_all_blocks=1 00:08:01.526 --rc geninfo_unexecuted_blocks=1 00:08:01.526 00:08:01.526 ' 00:08:01.526 03:34:54 nvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:08:01.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:01.526 --rc genhtml_branch_coverage=1 00:08:01.526 --rc genhtml_function_coverage=1 00:08:01.526 --rc genhtml_legend=1 00:08:01.526 --rc geninfo_all_blocks=1 00:08:01.526 --rc geninfo_unexecuted_blocks=1 00:08:01.526 00:08:01.526 ' 00:08:01.526 03:34:54 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:02.093 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:02.392 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:02.392 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:02.667 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:02.667 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:02.667 03:34:55 nvme -- nvme/nvme.sh@79 -- # uname 00:08:02.667 03:34:55 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:08:02.667 03:34:55 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:08:02.667 03:34:55 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:08:02.667 03:34:55 nvme -- common/autotest_common.sh@1082 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:08:02.667 03:34:55 nvme -- 
common/autotest_common.sh@1068 -- # _randomize_va_space=2 00:08:02.667 03:34:55 nvme -- common/autotest_common.sh@1069 -- # echo 0 00:08:02.667 03:34:55 nvme -- common/autotest_common.sh@1071 -- # stubpid=63327 00:08:02.667 Waiting for stub to ready for secondary processes... 00:08:02.667 03:34:55 nvme -- common/autotest_common.sh@1072 -- # echo Waiting for stub to ready for secondary processes... 00:08:02.667 03:34:55 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:02.667 03:34:55 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63327 ]] 00:08:02.667 03:34:55 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:02.667 03:34:55 nvme -- common/autotest_common.sh@1070 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:08:02.667 [2024-10-01 03:34:55.122421] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:08:02.667 [2024-10-01 03:34:55.122551] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:08:03.600 03:34:56 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:03.600 03:34:56 nvme -- common/autotest_common.sh@1075 -- # [[ -e /proc/63327 ]] 00:08:03.600 03:34:56 nvme -- common/autotest_common.sh@1076 -- # sleep 1s 00:08:03.600 [2024-10-01 03:34:56.087583] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.859 [2024-10-01 03:34:56.268674] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.859 [2024-10-01 03:34:56.268821] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.859 [2024-10-01 03:34:56.268844] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:08:03.859 [2024-10-01 03:34:56.280752] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:08:03.859 [2024-10-01 03:34:56.280785] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:03.859 [2024-10-01 03:34:56.292774] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:08:03.859 [2024-10-01 03:34:56.292990] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:08:03.859 [2024-10-01 03:34:56.295247] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:03.859 [2024-10-01 03:34:56.295444] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:08:03.859 [2024-10-01 03:34:56.295537] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:08:03.859 [2024-10-01 03:34:56.297157] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:03.859 [2024-10-01 03:34:56.297355] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:08:03.859 [2024-10-01 03:34:56.297451] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:08:03.859 [2024-10-01 03:34:56.299203] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:08:03.859 [2024-10-01 03:34:56.299372] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:08:03.859 [2024-10-01 03:34:56.299456] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:08:03.859 [2024-10-01 03:34:56.299524] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:08:03.859 [2024-10-01 03:34:56.299598] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:08:04.798 03:34:57 nvme -- common/autotest_common.sh@1073 -- # '[' -e /var/run/spdk_stub0 ']' 00:08:04.798 done. 00:08:04.798 03:34:57 nvme -- common/autotest_common.sh@1078 -- # echo done. 00:08:04.798 03:34:57 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:04.798 03:34:57 nvme -- common/autotest_common.sh@1101 -- # '[' 10 -le 1 ']' 00:08:04.798 03:34:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:04.798 03:34:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:04.798 ************************************ 00:08:04.798 START TEST nvme_reset 00:08:04.798 ************************************ 00:08:04.798 03:34:57 nvme.nvme_reset -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:08:05.056 Initializing NVMe Controllers 00:08:05.056 Skipping QEMU NVMe SSD at 0000:00:10.0 00:08:05.056 Skipping QEMU NVMe SSD at 0000:00:11.0 00:08:05.056 Skipping QEMU NVMe SSD at 0000:00:13.0 00:08:05.056 Skipping QEMU NVMe SSD at 0000:00:12.0 00:08:05.056 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:08:05.056 00:08:05.056 real 0m0.257s 00:08:05.056 user 0m0.059s 00:08:05.056 sys 0m0.128s 00:08:05.056 03:34:57 nvme.nvme_reset -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.056 03:34:57 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:08:05.056 ************************************ 00:08:05.056 END TEST nvme_reset 00:08:05.056 ************************************ 00:08:05.056 03:34:57 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:08:05.056 03:34:57 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:05.056 03:34:57 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.056 03:34:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:05.056 ************************************ 00:08:05.056 START TEST nvme_identify 00:08:05.056 ************************************ 00:08:05.056 03:34:57 nvme.nvme_identify -- common/autotest_common.sh@1125 -- # nvme_identify 00:08:05.056 03:34:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:08:05.056 03:34:57 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:08:05.056 03:34:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:08:05.056 03:34:57 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:08:05.056 03:34:57 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:05.056 03:34:57 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:08:05.056 03:34:57 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:05.056 03:34:57 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:05.056 03:34:57 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:08:05.057 03:34:57 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:05.057 03:34:57 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:05.057 03:34:57 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:08:05.317 [2024-10-01 03:34:57.634318] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0] process 63360 terminated unexpected 00:08:05.317 ===================================================== 00:08:05.317 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:05.317 ===================================================== 00:08:05.317 Controller Capabilities/Features 00:08:05.317 ================================ 00:08:05.317 Vendor ID: 1b36 00:08:05.317 Subsystem Vendor ID: 1af4 00:08:05.317 Serial Number: 12340 00:08:05.317 Model Number: QEMU NVMe Ctrl 00:08:05.317 Firmware Version: 8.0.0 00:08:05.317 Recommended Arb Burst: 6 00:08:05.317 IEEE OUI Identifier: 00 54 52 00:08:05.317 Multi-path I/O 00:08:05.317 May have multiple subsystem ports: No 00:08:05.317 May have multiple controllers: No 00:08:05.317 Associated with SR-IOV VF: No 00:08:05.317 Max Data Transfer Size: 524288 00:08:05.317 Max Number of Namespaces: 256 00:08:05.317 Max Number of I/O Queues: 64 00:08:05.317 NVMe Specification Version (VS): 1.4 00:08:05.317 NVMe Specification Version (Identify): 1.4 00:08:05.317 Maximum Queue Entries: 2048 00:08:05.317 Contiguous Queues Required: Yes 00:08:05.317 Arbitration Mechanisms Supported 00:08:05.317 Weighted Round Robin: Not Supported 00:08:05.317 Vendor Specific: Not Supported 00:08:05.317 Reset Timeout: 7500 ms 00:08:05.317 Doorbell Stride: 4 bytes 00:08:05.317 NVM Subsystem Reset: Not Supported 00:08:05.317 Command Sets Supported 00:08:05.317 NVM Command Set: Supported 00:08:05.317 Boot Partition: Not Supported 00:08:05.317 Memory Page Size Minimum: 4096 bytes 00:08:05.317 Memory Page Size Maximum: 65536 bytes 00:08:05.317 Persistent Memory Region: Not Supported 00:08:05.317 Optional Asynchronous Events Supported 00:08:05.317 Namespace Attribute Notices: Supported 00:08:05.317 Firmware Activation Notices: Not Supported 00:08:05.317 ANA Change Notices: Not Supported 00:08:05.317 PLE Aggregate Log Change Notices: Not Supported 00:08:05.317 LBA Status Info Alert Notices: Not Supported 00:08:05.317 EGE Aggregate Log Change Notices: Not Supported 00:08:05.317 Normal NVM Subsystem Shutdown event: Not Supported 00:08:05.317 Zone Descriptor Change Notices: Not Supported 00:08:05.318 Discovery Log Change Notices: Not Supported 00:08:05.318 Controller Attributes 00:08:05.318 128-bit Host Identifier: Not Supported 00:08:05.318 Non-Operational Permissive Mode: Not Supported 00:08:05.318 NVM Sets: Not Supported 00:08:05.318 Read Recovery Levels: Not Supported 00:08:05.318 Endurance Groups: Not Supported 00:08:05.318 Predictable Latency Mode: Not Supported 00:08:05.318 Traffic Based Keep ALive: Not Supported 00:08:05.318 Namespace Granularity: Not Supported 00:08:05.318 SQ Associations: Not Supported 00:08:05.318 UUID List: Not Supported 00:08:05.318 Multi-Domain Subsystem: Not Supported 00:08:05.318 Fixed Capacity Management: Not Supported 00:08:05.318 Variable Capacity Management: Not Supported 00:08:05.318 Delete Endurance Group: Not Supported 00:08:05.318 Delete NVM Set: Not Supported 00:08:05.318 Extended LBA Formats Supported: Supported 00:08:05.318 Flexible Data Placement Supported: Not Supported 00:08:05.318 00:08:05.318 Controller Memory Buffer Support 00:08:05.318 ================================ 00:08:05.318 Supported: No 00:08:05.318 
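Note: the nvme_identify trace above builds its controller list with get_nvme_bdfs, which pipes scripts/gen_nvme.sh into jq to pull each PCI address out of .params.traddr, then dumps every controller with spdk_nvme_identify -i 0. A condensed, standalone sketch of that sequence (assuming the same workspace layout as this run; the error check is an addition for illustration):

  #!/usr/bin/env bash
  rootdir=/home/vagrant/spdk_repo/spdk

  # gen_nvme.sh emits a JSON bdev config; each NVMe entry carries its
  # PCI address in .params.traddr, which jq flattens to one bdf per line.
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || { echo "no NVMe controllers found" >&2; exit 1; }
  printf '%s\n' "${bdfs[@]}"

  # Identify then walks every attached controller; -i 0 selects shared-memory id 0.
  "$rootdir/build/bin/spdk_nvme_identify" -i 0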
00:08:05.318 Persistent Memory Region Support 00:08:05.318 ================================ 00:08:05.318 Supported: No 00:08:05.318 00:08:05.318 Admin Command Set Attributes 00:08:05.318 ============================ 00:08:05.318 Security Send/Receive: Not Supported 00:08:05.318 Format NVM: Supported 00:08:05.318 Firmware Activate/Download: Not Supported 00:08:05.318 Namespace Management: Supported 00:08:05.318 Device Self-Test: Not Supported 00:08:05.318 Directives: Supported 00:08:05.318 NVMe-MI: Not Supported 00:08:05.318 Virtualization Management: Not Supported 00:08:05.318 Doorbell Buffer Config: Supported 00:08:05.318 Get LBA Status Capability: Not Supported 00:08:05.318 Command & Feature Lockdown Capability: Not Supported 00:08:05.318 Abort Command Limit: 4 00:08:05.318 Async Event Request Limit: 4 00:08:05.318 Number of Firmware Slots: N/A 00:08:05.318 Firmware Slot 1 Read-Only: N/A 00:08:05.318 Firmware Activation Without Reset: N/A 00:08:05.318 Multiple Update Detection Support: N/A 00:08:05.318 Firmware Update Granularity: No Information Provided 00:08:05.318 Per-Namespace SMART Log: Yes 00:08:05.318 Asymmetric Namespace Access Log Page: Not Supported 00:08:05.318 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:05.318 Command Effects Log Page: Supported 00:08:05.318 Get Log Page Extended Data: Supported 00:08:05.318 Telemetry Log Pages: Not Supported 00:08:05.318 Persistent Event Log Pages: Not Supported 00:08:05.318 Supported Log Pages Log Page: May Support 00:08:05.318 Commands Supported & Effects Log Page: Not Supported 00:08:05.318 Feature Identifiers & Effects Log Page:May Support 00:08:05.318 NVMe-MI Commands & Effects Log Page: May Support 00:08:05.318 Data Area 4 for Telemetry Log: Not Supported 00:08:05.318 Error Log Page Entries Supported: 1 00:08:05.318 Keep Alive: Not Supported 00:08:05.318 00:08:05.318 NVM Command Set Attributes 00:08:05.318 ========================== 00:08:05.318 Submission Queue Entry Size 00:08:05.318 Max: 64 00:08:05.318 Min: 64 00:08:05.318 Completion Queue Entry Size 00:08:05.318 Max: 16 00:08:05.318 Min: 16 00:08:05.318 Number of Namespaces: 256 00:08:05.318 Compare Command: Supported 00:08:05.318 Write Uncorrectable Command: Not Supported 00:08:05.318 Dataset Management Command: Supported 00:08:05.318 Write Zeroes Command: Supported 00:08:05.318 Set Features Save Field: Supported 00:08:05.318 Reservations: Not Supported 00:08:05.318 Timestamp: Supported 00:08:05.318 Copy: Supported 00:08:05.318 Volatile Write Cache: Present 00:08:05.318 Atomic Write Unit (Normal): 1 00:08:05.318 Atomic Write Unit (PFail): 1 00:08:05.318 Atomic Compare & Write Unit: 1 00:08:05.318 Fused Compare & Write: Not Supported 00:08:05.318 Scatter-Gather List 00:08:05.318 SGL Command Set: Supported 00:08:05.318 SGL Keyed: Not Supported 00:08:05.318 SGL Bit Bucket Descriptor: Not Supported 00:08:05.318 SGL Metadata Pointer: Not Supported 00:08:05.318 Oversized SGL: Not Supported 00:08:05.318 SGL Metadata Address: Not Supported 00:08:05.318 SGL Offset: Not Supported 00:08:05.318 Transport SGL Data Block: Not Supported 00:08:05.318 Replay Protected Memory Block: Not Supported 00:08:05.318 00:08:05.318 Firmware Slot Information 00:08:05.318 ========================= 00:08:05.318 Active slot: 1 00:08:05.318 Slot 1 Firmware Revision: 1.0 00:08:05.318 00:08:05.318 00:08:05.318 Commands Supported and Effects 00:08:05.318 ============================== 00:08:05.318 Admin Commands 00:08:05.318 -------------- 00:08:05.318 Delete I/O Submission Queue (00h): Supported 00:08:05.318 
Create I/O Submission Queue (01h): Supported 00:08:05.318 Get Log Page (02h): Supported 00:08:05.318 Delete I/O Completion Queue (04h): Supported 00:08:05.318 Create I/O Completion Queue (05h): Supported 00:08:05.318 Identify (06h): Supported 00:08:05.318 Abort (08h): Supported 00:08:05.318 Set Features (09h): Supported 00:08:05.318 Get Features (0Ah): Supported 00:08:05.318 Asynchronous Event Request (0Ch): Supported 00:08:05.318 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:05.318 Directive Send (19h): Supported 00:08:05.318 Directive Receive (1Ah): Supported 00:08:05.318 Virtualization Management (1Ch): Supported 00:08:05.318 Doorbell Buffer Config (7Ch): Supported 00:08:05.318 Format NVM (80h): Supported LBA-Change 00:08:05.318 I/O Commands 00:08:05.318 ------------ 00:08:05.318 Flush (00h): Supported LBA-Change 00:08:05.318 Write (01h): Supported LBA-Change 00:08:05.318 Read (02h): Supported 00:08:05.318 Compare (05h): Supported 00:08:05.318 Write Zeroes (08h): Supported LBA-Change 00:08:05.318 Dataset Management (09h): Supported LBA-Change 00:08:05.318 Unknown (0Ch): Supported 00:08:05.318 Unknown (12h): Supported 00:08:05.318 Copy (19h): Supported LBA-Change 00:08:05.318 Unknown (1Dh): Supported LBA-Change 00:08:05.318 00:08:05.318 Error Log 00:08:05.318 ========= 00:08:05.318 00:08:05.318 Arbitration 00:08:05.318 =========== 00:08:05.318 Arbitration Burst: no limit 00:08:05.318 00:08:05.318 Power Management 00:08:05.318 ================ 00:08:05.318 Number of Power States: 1 00:08:05.318 Current Power State: Power State #0 00:08:05.318 Power State #0: 00:08:05.318 Max Power: 25.00 W 00:08:05.318 Non-Operational State: Operational 00:08:05.318 Entry Latency: 16 microseconds 00:08:05.318 Exit Latency: 4 microseconds 00:08:05.318 Relative Read Throughput: 0 00:08:05.318 Relative Read Latency: 0 00:08:05.318 Relative Write Throughput: 0 00:08:05.318 Relative Write Latency: 0 00:08:05.318 Idle Power[2024-10-01 03:34:57.635236] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0] process 63360 terminated unexpected 00:08:05.318 : Not Reported 00:08:05.318 Active Power: Not Reported 00:08:05.318 Non-Operational Permissive Mode: Not Supported 00:08:05.318 00:08:05.318 Health Information 00:08:05.318 ================== 00:08:05.318 Critical Warnings: 00:08:05.318 Available Spare Space: OK 00:08:05.318 Temperature: OK 00:08:05.318 Device Reliability: OK 00:08:05.318 Read Only: No 00:08:05.319 Volatile Memory Backup: OK 00:08:05.319 Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.319 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:05.319 Available Spare: 0% 00:08:05.319 Available Spare Threshold: 0% 00:08:05.319 Life Percentage Used: 0% 00:08:05.319 Data Units Read: 642 00:08:05.319 Data Units Written: 570 00:08:05.319 Host Read Commands: 33535 00:08:05.319 Host Write Commands: 33321 00:08:05.319 Controller Busy Time: 0 minutes 00:08:05.319 Power Cycles: 0 00:08:05.319 Power On Hours: 0 hours 00:08:05.319 Unsafe Shutdowns: 0 00:08:05.319 Unrecoverable Media Errors: 0 00:08:05.319 Lifetime Error Log Entries: 0 00:08:05.319 Warning Temperature Time: 0 minutes 00:08:05.319 Critical Temperature Time: 0 minutes 00:08:05.319 00:08:05.319 Number of Queues 00:08:05.319 ================ 00:08:05.319 Number of I/O Submission Queues: 64 00:08:05.319 Number of I/O Completion Queues: 64 00:08:05.319 00:08:05.319 ZNS Specific Controller Data 00:08:05.319 ============================ 00:08:05.319 Zone Append Size Limit: 0 00:08:05.319 00:08:05.319 
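Note: in the Active Namespaces listings that follow, the parenthesized capacity is the LBA count multiplied by the data size of the current LBA format, apparently truncated to whole GiB (an inference from the figures in these dumps, not a spec guarantee). Quick shell arithmetic against the namespaces reported below:

  # 12340 ns1: 1548666 LBAs x 4096 B = 6343335936 B, truncates to 5 GiB
  echo $(( 1548666 * 4096 / 1024 / 1024 / 1024 ))   # prints 5
  # 12341 ns1: 1310720 LBAs x 4096 B = exactly 5 GiB
  echo $(( 1310720 * 4096 / 1024 / 1024 / 1024 ))   # prints 5
  # 12343 ns1: 262144 LBAs x 4096 B = exactly 1 GiB
  echo $(( 262144 * 4096 / 1024 / 1024 / 1024 ))    # prints 1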
00:08:05.319 Active Namespaces 00:08:05.319 ================= 00:08:05.319 Namespace ID:1 00:08:05.319 Error Recovery Timeout: Unlimited 00:08:05.319 Command Set Identifier: NVM (00h) 00:08:05.319 Deallocate: Supported 00:08:05.319 Deallocated/Unwritten Error: Supported 00:08:05.319 Deallocated Read Value: All 0x00 00:08:05.319 Deallocate in Write Zeroes: Not Supported 00:08:05.319 Deallocated Guard Field: 0xFFFF 00:08:05.319 Flush: Supported 00:08:05.319 Reservation: Not Supported 00:08:05.319 Metadata Transferred as: Separate Metadata Buffer 00:08:05.319 Namespace Sharing Capabilities: Private 00:08:05.319 Size (in LBAs): 1548666 (5GiB) 00:08:05.319 Capacity (in LBAs): 1548666 (5GiB) 00:08:05.319 Utilization (in LBAs): 1548666 (5GiB) 00:08:05.319 Thin Provisioning: Not Supported 00:08:05.319 Per-NS Atomic Units: No 00:08:05.319 Maximum Single Source Range Length: 128 00:08:05.319 Maximum Copy Length: 128 00:08:05.319 Maximum Source Range Count: 128 00:08:05.319 NGUID/EUI64 Never Reused: No 00:08:05.319 Namespace Write Protected: No 00:08:05.319 Number of LBA Formats: 8 00:08:05.319 Current LBA Format: LBA Format #07 00:08:05.319 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.319 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.319 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.319 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.319 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.319 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.319 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.319 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.319 00:08:05.319 NVM Specific Namespace Data 00:08:05.319 =========================== 00:08:05.319 Logical Block Storage Tag Mask: 0 00:08:05.319 Protection Information Capabilities: 00:08:05.319 16b Guard Protection Information Storage Tag Support: No 00:08:05.319 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.319 Storage Tag Check Read Support: No 00:08:05.319 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.319 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.319 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.319 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.319 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.319 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.319 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.319 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.319 ===================================================== 00:08:05.319 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:05.319 ===================================================== 00:08:05.319 Controller Capabilities/Features 00:08:05.319 ================================ 00:08:05.319 Vendor ID: 1b36 00:08:05.319 Subsystem Vendor ID: 1af4 00:08:05.319 Serial Number: 12341 00:08:05.319 Model Number: QEMU NVMe Ctrl 00:08:05.319 Firmware Version: 8.0.0 00:08:05.319 Recommended Arb Burst: 6 00:08:05.319 IEEE OUI Identifier: 00 54 52 00:08:05.319 Multi-path I/O 00:08:05.319 May have multiple subsystem ports: No 00:08:05.319 May have multiple controllers: No 00:08:05.319 
Associated with SR-IOV VF: No 00:08:05.319 Max Data Transfer Size: 524288 00:08:05.319 Max Number of Namespaces: 256 00:08:05.319 Max Number of I/O Queues: 64 00:08:05.319 NVMe Specification Version (VS): 1.4 00:08:05.319 NVMe Specification Version (Identify): 1.4 00:08:05.319 Maximum Queue Entries: 2048 00:08:05.319 Contiguous Queues Required: Yes 00:08:05.319 Arbitration Mechanisms Supported 00:08:05.319 Weighted Round Robin: Not Supported 00:08:05.319 Vendor Specific: Not Supported 00:08:05.319 Reset Timeout: 7500 ms 00:08:05.319 Doorbell Stride: 4 bytes 00:08:05.319 NVM Subsystem Reset: Not Supported 00:08:05.319 Command Sets Supported 00:08:05.319 NVM Command Set: Supported 00:08:05.319 Boot Partition: Not Supported 00:08:05.319 Memory Page Size Minimum: 4096 bytes 00:08:05.319 Memory Page Size Maximum: 65536 bytes 00:08:05.319 Persistent Memory Region: Not Supported 00:08:05.319 Optional Asynchronous Events Supported 00:08:05.319 Namespace Attribute Notices: Supported 00:08:05.319 Firmware Activation Notices: Not Supported 00:08:05.319 ANA Change Notices: Not Supported 00:08:05.319 PLE Aggregate Log Change Notices: Not Supported 00:08:05.319 LBA Status Info Alert Notices: Not Supported 00:08:05.319 EGE Aggregate Log Change Notices: Not Supported 00:08:05.319 Normal NVM Subsystem Shutdown event: Not Supported 00:08:05.319 Zone Descriptor Change Notices: Not Supported 00:08:05.319 Discovery Log Change Notices: Not Supported 00:08:05.319 Controller Attributes 00:08:05.319 128-bit Host Identifier: Not Supported 00:08:05.319 Non-Operational Permissive Mode: Not Supported 00:08:05.319 NVM Sets: Not Supported 00:08:05.319 Read Recovery Levels: Not Supported 00:08:05.319 Endurance Groups: Not Supported 00:08:05.319 Predictable Latency Mode: Not Supported 00:08:05.319 Traffic Based Keep ALive: Not Supported 00:08:05.319 Namespace Granularity: Not Supported 00:08:05.319 SQ Associations: Not Supported 00:08:05.319 UUID List: Not Supported 00:08:05.319 Multi-Domain Subsystem: Not Supported 00:08:05.319 Fixed Capacity Management: Not Supported 00:08:05.319 Variable Capacity Management: Not Supported 00:08:05.319 Delete Endurance Group: Not Supported 00:08:05.319 Delete NVM Set: Not Supported 00:08:05.319 Extended LBA Formats Supported: Supported 00:08:05.319 Flexible Data Placement Supported: Not Supported 00:08:05.319 00:08:05.319 Controller Memory Buffer Support 00:08:05.319 ================================ 00:08:05.319 Supported: No 00:08:05.319 00:08:05.319 Persistent Memory Region Support 00:08:05.319 ================================ 00:08:05.319 Supported: No 00:08:05.319 00:08:05.319 Admin Command Set Attributes 00:08:05.319 ============================ 00:08:05.319 Security Send/Receive: Not Supported 00:08:05.319 Format NVM: Supported 00:08:05.319 Firmware Activate/Download: Not Supported 00:08:05.319 Namespace Management: Supported 00:08:05.319 Device Self-Test: Not Supported 00:08:05.319 Directives: Supported 00:08:05.319 NVMe-MI: Not Supported 00:08:05.319 Virtualization Management: Not Supported 00:08:05.319 Doorbell Buffer Config: Supported 00:08:05.319 Get LBA Status Capability: Not Supported 00:08:05.319 Command & Feature Lockdown Capability: Not Supported 00:08:05.319 Abort Command Limit: 4 00:08:05.319 Async Event Request Limit: 4 00:08:05.319 Number of Firmware Slots: N/A 00:08:05.319 Firmware Slot 1 Read-Only: N/A 00:08:05.319 Firmware Activation Without Reset: N/A 00:08:05.319 Multiple Update Detection Support: N/A 00:08:05.319 Firmware Update Granularity: No Information 
Provided 00:08:05.319 Per-Namespace SMART Log: Yes 00:08:05.319 Asymmetric Namespace Access Log Page: Not Supported 00:08:05.319 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:05.319 Command Effects Log Page: Supported 00:08:05.319 Get Log Page Extended Data: Supported 00:08:05.319 Telemetry Log Pages: Not Supported 00:08:05.319 Persistent Event Log Pages: Not Supported 00:08:05.319 Supported Log Pages Log Page: May Support 00:08:05.320 Commands Supported & Effects Log Page: Not Supported 00:08:05.320 Feature Identifiers & Effects Log Page:May Support 00:08:05.320 NVMe-MI Commands & Effects Log Page: May Support 00:08:05.320 Data Area 4 for Telemetry Log: Not Supported 00:08:05.320 Error Log Page Entries Supported: 1 00:08:05.320 Keep Alive: Not Supported 00:08:05.320 00:08:05.320 NVM Command Set Attributes 00:08:05.320 ========================== 00:08:05.320 Submission Queue Entry Size 00:08:05.320 Max: 64 00:08:05.320 Min: 64 00:08:05.320 Completion Queue Entry Size 00:08:05.320 Max: 16 00:08:05.320 Min: 16 00:08:05.320 Number of Namespaces: 256 00:08:05.320 Compare Command: Supported 00:08:05.320 Write Uncorrectable Command: Not Supported 00:08:05.320 Dataset Management Command: Supported 00:08:05.320 Write Zeroes Command: Supported 00:08:05.320 Set Features Save Field: Supported 00:08:05.320 Reservations: Not Supported 00:08:05.320 Timestamp: Supported 00:08:05.320 Copy: Supported 00:08:05.320 Volatile Write Cache: Present 00:08:05.320 Atomic Write Unit (Normal): 1 00:08:05.320 Atomic Write Unit (PFail): 1 00:08:05.320 Atomic Compare & Write Unit: 1 00:08:05.320 Fused Compare & Write: Not Supported 00:08:05.320 Scatter-Gather List 00:08:05.320 SGL Command Set: Supported 00:08:05.320 SGL Keyed: Not Supported 00:08:05.320 SGL Bit Bucket Descriptor: Not Supported 00:08:05.320 SGL Metadata Pointer: Not Supported 00:08:05.320 Oversized SGL: Not Supported 00:08:05.320 SGL Metadata Address: Not Supported 00:08:05.320 SGL Offset: Not Supported 00:08:05.320 Transport SGL Data Block: Not Supported 00:08:05.320 Replay Protected Memory Block: Not Supported 00:08:05.320 00:08:05.320 Firmware Slot Information 00:08:05.320 ========================= 00:08:05.320 Active slot: 1 00:08:05.320 Slot 1 Firmware Revision: 1.0 00:08:05.320 00:08:05.320 00:08:05.320 Commands Supported and Effects 00:08:05.320 ============================== 00:08:05.320 Admin Commands 00:08:05.320 -------------- 00:08:05.320 Delete I/O Submission Queue (00h): Supported 00:08:05.320 Create I/O Submission Queue (01h): Supported 00:08:05.320 Get Log Page (02h): Supported 00:08:05.320 Delete I/O Completion Queue (04h): Supported 00:08:05.320 Create I/O Completion Queue (05h): Supported 00:08:05.320 Identify (06h): Supported 00:08:05.320 Abort (08h): Supported 00:08:05.320 Set Features (09h): Supported 00:08:05.320 Get Features (0Ah): Supported 00:08:05.320 Asynchronous Event Request (0Ch): Supported 00:08:05.320 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:05.320 Directive Send (19h): Supported 00:08:05.320 Directive Receive (1Ah): Supported 00:08:05.320 Virtualization Management (1Ch): Supported 00:08:05.320 Doorbell Buffer Config (7Ch): Supported 00:08:05.320 Format NVM (80h): Supported LBA-Change 00:08:05.320 I/O Commands 00:08:05.320 ------------ 00:08:05.320 Flush (00h): Supported LBA-Change 00:08:05.320 Write (01h): Supported LBA-Change 00:08:05.320 Read (02h): Supported 00:08:05.320 Compare (05h): Supported 00:08:05.320 Write Zeroes (08h): Supported LBA-Change 00:08:05.320 Dataset Management (09h): 
Supported LBA-Change 00:08:05.320 Unknown (0Ch): Supported 00:08:05.320 Unknown (12h): Supported 00:08:05.320 Copy (19h): Supported LBA-Change 00:08:05.320 Unknown (1Dh): Supported LBA-Change 00:08:05.320 00:08:05.320 Error Log 00:08:05.320 ========= 00:08:05.320 00:08:05.320 Arbitration 00:08:05.320 =========== 00:08:05.320 Arbitration Burst: no limit 00:08:05.320 00:08:05.320 Power Management 00:08:05.320 ================ 00:08:05.320 Number of Power States: 1 00:08:05.320 Current Power State: Power State #0 00:08:05.320 Power State #0: 00:08:05.320 Max Power: 25.00 W 00:08:05.320 Non-Operational State: Operational 00:08:05.320 Entry Latency: 16 microseconds 00:08:05.320 Exit Latency: 4 microseconds 00:08:05.320 Relative Read Throughput: 0 00:08:05.320 Relative Read Latency: 0 00:08:05.320 Relative Write Throughput: 0 00:08:05.320 Relative Write Latency: 0 00:08:05.320 Idle Power: Not Reported 00:08:05.320 Active Power: Not Reported 00:08:05.320 Non-Operational Permissive Mode: Not Supported 00:08:05.320 00:08:05.320 Health Information 00:08:05.320 ================== 00:08:05.320 Critical Warnings: 00:08:05.320 Available Spare Space: OK 00:08:05.320 Temperature: [2024-10-01 03:34:57.635888] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0] process 63360 terminated unexpected 00:08:05.320 OK 00:08:05.320 Device Reliability: OK 00:08:05.320 Read Only: No 00:08:05.320 Volatile Memory Backup: OK 00:08:05.320 Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.320 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:05.320 Available Spare: 0% 00:08:05.320 Available Spare Threshold: 0% 00:08:05.320 Life Percentage Used: 0% 00:08:05.320 Data Units Read: 971 00:08:05.320 Data Units Written: 837 00:08:05.320 Host Read Commands: 50328 00:08:05.320 Host Write Commands: 49092 00:08:05.320 Controller Busy Time: 0 minutes 00:08:05.320 Power Cycles: 0 00:08:05.320 Power On Hours: 0 hours 00:08:05.320 Unsafe Shutdowns: 0 00:08:05.320 Unrecoverable Media Errors: 0 00:08:05.320 Lifetime Error Log Entries: 0 00:08:05.320 Warning Temperature Time: 0 minutes 00:08:05.320 Critical Temperature Time: 0 minutes 00:08:05.320 00:08:05.320 Number of Queues 00:08:05.320 ================ 00:08:05.320 Number of I/O Submission Queues: 64 00:08:05.320 Number of I/O Completion Queues: 64 00:08:05.320 00:08:05.320 ZNS Specific Controller Data 00:08:05.320 ============================ 00:08:05.320 Zone Append Size Limit: 0 00:08:05.320 00:08:05.320 00:08:05.320 Active Namespaces 00:08:05.320 ================= 00:08:05.320 Namespace ID:1 00:08:05.320 Error Recovery Timeout: Unlimited 00:08:05.320 Command Set Identifier: NVM (00h) 00:08:05.320 Deallocate: Supported 00:08:05.320 Deallocated/Unwritten Error: Supported 00:08:05.320 Deallocated Read Value: All 0x00 00:08:05.320 Deallocate in Write Zeroes: Not Supported 00:08:05.320 Deallocated Guard Field: 0xFFFF 00:08:05.320 Flush: Supported 00:08:05.320 Reservation: Not Supported 00:08:05.320 Namespace Sharing Capabilities: Private 00:08:05.320 Size (in LBAs): 1310720 (5GiB) 00:08:05.320 Capacity (in LBAs): 1310720 (5GiB) 00:08:05.320 Utilization (in LBAs): 1310720 (5GiB) 00:08:05.320 Thin Provisioning: Not Supported 00:08:05.320 Per-NS Atomic Units: No 00:08:05.320 Maximum Single Source Range Length: 128 00:08:05.320 Maximum Copy Length: 128 00:08:05.320 Maximum Source Range Count: 128 00:08:05.320 NGUID/EUI64 Never Reused: No 00:08:05.320 Namespace Write Protected: No 00:08:05.320 Number of LBA Formats: 8 00:08:05.320 Current LBA Format: LBA Format 
#04 00:08:05.320 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.320 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.320 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.320 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.320 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.320 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.320 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.320 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.320 00:08:05.320 NVM Specific Namespace Data 00:08:05.320 =========================== 00:08:05.320 Logical Block Storage Tag Mask: 0 00:08:05.321 Protection Information Capabilities: 00:08:05.321 16b Guard Protection Information Storage Tag Support: No 00:08:05.321 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.321 Storage Tag Check Read Support: No 00:08:05.321 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.321 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.321 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.321 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.321 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.321 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.321 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.321 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.321 ===================================================== 00:08:05.321 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:05.321 ===================================================== 00:08:05.321 Controller Capabilities/Features 00:08:05.321 ================================ 00:08:05.321 Vendor ID: 1b36 00:08:05.321 Subsystem Vendor ID: 1af4 00:08:05.321 Serial Number: 12343 00:08:05.321 Model Number: QEMU NVMe Ctrl 00:08:05.321 Firmware Version: 8.0.0 00:08:05.321 Recommended Arb Burst: 6 00:08:05.321 IEEE OUI Identifier: 00 54 52 00:08:05.321 Multi-path I/O 00:08:05.321 May have multiple subsystem ports: No 00:08:05.321 May have multiple controllers: Yes 00:08:05.321 Associated with SR-IOV VF: No 00:08:05.321 Max Data Transfer Size: 524288 00:08:05.321 Max Number of Namespaces: 256 00:08:05.321 Max Number of I/O Queues: 64 00:08:05.321 NVMe Specification Version (VS): 1.4 00:08:05.321 NVMe Specification Version (Identify): 1.4 00:08:05.321 Maximum Queue Entries: 2048 00:08:05.321 Contiguous Queues Required: Yes 00:08:05.321 Arbitration Mechanisms Supported 00:08:05.321 Weighted Round Robin: Not Supported 00:08:05.321 Vendor Specific: Not Supported 00:08:05.321 Reset Timeout: 7500 ms 00:08:05.321 Doorbell Stride: 4 bytes 00:08:05.321 NVM Subsystem Reset: Not Supported 00:08:05.321 Command Sets Supported 00:08:05.321 NVM Command Set: Supported 00:08:05.321 Boot Partition: Not Supported 00:08:05.321 Memory Page Size Minimum: 4096 bytes 00:08:05.321 Memory Page Size Maximum: 65536 bytes 00:08:05.321 Persistent Memory Region: Not Supported 00:08:05.321 Optional Asynchronous Events Supported 00:08:05.321 Namespace Attribute Notices: Supported 00:08:05.321 Firmware Activation Notices: Not Supported 00:08:05.321 ANA Change Notices: Not Supported 00:08:05.321 PLE Aggregate Log Change 
Notices: Not Supported 00:08:05.321 LBA Status Info Alert Notices: Not Supported 00:08:05.321 EGE Aggregate Log Change Notices: Not Supported 00:08:05.321 Normal NVM Subsystem Shutdown event: Not Supported 00:08:05.321 Zone Descriptor Change Notices: Not Supported 00:08:05.321 Discovery Log Change Notices: Not Supported 00:08:05.321 Controller Attributes 00:08:05.321 128-bit Host Identifier: Not Supported 00:08:05.321 Non-Operational Permissive Mode: Not Supported 00:08:05.321 NVM Sets: Not Supported 00:08:05.321 Read Recovery Levels: Not Supported 00:08:05.321 Endurance Groups: Supported 00:08:05.321 Predictable Latency Mode: Not Supported 00:08:05.321 Traffic Based Keep ALive: Not Supported 00:08:05.321 Namespace Granularity: Not Supported 00:08:05.321 SQ Associations: Not Supported 00:08:05.321 UUID List: Not Supported 00:08:05.321 Multi-Domain Subsystem: Not Supported 00:08:05.321 Fixed Capacity Management: Not Supported 00:08:05.321 Variable Capacity Management: Not Supported 00:08:05.321 Delete Endurance Group: Not Supported 00:08:05.321 Delete NVM Set: Not Supported 00:08:05.321 Extended LBA Formats Supported: Supported 00:08:05.321 Flexible Data Placement Supported: Supported 00:08:05.321 00:08:05.321 Controller Memory Buffer Support 00:08:05.321 ================================ 00:08:05.321 Supported: No 00:08:05.321 00:08:05.321 Persistent Memory Region Support 00:08:05.321 ================================ 00:08:05.321 Supported: No 00:08:05.321 00:08:05.321 Admin Command Set Attributes 00:08:05.321 ============================ 00:08:05.321 Security Send/Receive: Not Supported 00:08:05.321 Format NVM: Supported 00:08:05.321 Firmware Activate/Download: Not Supported 00:08:05.321 Namespace Management: Supported 00:08:05.321 Device Self-Test: Not Supported 00:08:05.321 Directives: Supported 00:08:05.321 NVMe-MI: Not Supported 00:08:05.321 Virtualization Management: Not Supported 00:08:05.321 Doorbell Buffer Config: Supported 00:08:05.321 Get LBA Status Capability: Not Supported 00:08:05.321 Command & Feature Lockdown Capability: Not Supported 00:08:05.321 Abort Command Limit: 4 00:08:05.321 Async Event Request Limit: 4 00:08:05.321 Number of Firmware Slots: N/A 00:08:05.321 Firmware Slot 1 Read-Only: N/A 00:08:05.321 Firmware Activation Without Reset: N/A 00:08:05.321 Multiple Update Detection Support: N/A 00:08:05.321 Firmware Update Granularity: No Information Provided 00:08:05.321 Per-Namespace SMART Log: Yes 00:08:05.321 Asymmetric Namespace Access Log Page: Not Supported 00:08:05.321 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:05.321 Command Effects Log Page: Supported 00:08:05.321 Get Log Page Extended Data: Supported 00:08:05.321 Telemetry Log Pages: Not Supported 00:08:05.321 Persistent Event Log Pages: Not Supported 00:08:05.321 Supported Log Pages Log Page: May Support 00:08:05.321 Commands Supported & Effects Log Page: Not Supported 00:08:05.321 Feature Identifiers & Effects Log Page:May Support 00:08:05.321 NVMe-MI Commands & Effects Log Page: May Support 00:08:05.321 Data Area 4 for Telemetry Log: Not Supported 00:08:05.321 Error Log Page Entries Supported: 1 00:08:05.321 Keep Alive: Not Supported 00:08:05.321 00:08:05.321 NVM Command Set Attributes 00:08:05.321 ========================== 00:08:05.321 Submission Queue Entry Size 00:08:05.321 Max: 64 00:08:05.321 Min: 64 00:08:05.321 Completion Queue Entry Size 00:08:05.321 Max: 16 00:08:05.321 Min: 16 00:08:05.321 Number of Namespaces: 256 00:08:05.321 Compare Command: Supported 00:08:05.321 Write 
Uncorrectable Command: Not Supported 00:08:05.321 Dataset Management Command: Supported 00:08:05.321 Write Zeroes Command: Supported 00:08:05.321 Set Features Save Field: Supported 00:08:05.321 Reservations: Not Supported 00:08:05.321 Timestamp: Supported 00:08:05.321 Copy: Supported 00:08:05.321 Volatile Write Cache: Present 00:08:05.321 Atomic Write Unit (Normal): 1 00:08:05.321 Atomic Write Unit (PFail): 1 00:08:05.321 Atomic Compare & Write Unit: 1 00:08:05.321 Fused Compare & Write: Not Supported 00:08:05.321 Scatter-Gather List 00:08:05.321 SGL Command Set: Supported 00:08:05.321 SGL Keyed: Not Supported 00:08:05.321 SGL Bit Bucket Descriptor: Not Supported 00:08:05.321 SGL Metadata Pointer: Not Supported 00:08:05.321 Oversized SGL: Not Supported 00:08:05.321 SGL Metadata Address: Not Supported 00:08:05.321 SGL Offset: Not Supported 00:08:05.321 Transport SGL Data Block: Not Supported 00:08:05.321 Replay Protected Memory Block: Not Supported 00:08:05.321 00:08:05.321 Firmware Slot Information 00:08:05.321 ========================= 00:08:05.321 Active slot: 1 00:08:05.321 Slot 1 Firmware Revision: 1.0 00:08:05.321 00:08:05.321 00:08:05.321 Commands Supported and Effects 00:08:05.321 ============================== 00:08:05.321 Admin Commands 00:08:05.321 -------------- 00:08:05.321 Delete I/O Submission Queue (00h): Supported 00:08:05.321 Create I/O Submission Queue (01h): Supported 00:08:05.321 Get Log Page (02h): Supported 00:08:05.321 Delete I/O Completion Queue (04h): Supported 00:08:05.321 Create I/O Completion Queue (05h): Supported 00:08:05.321 Identify (06h): Supported 00:08:05.321 Abort (08h): Supported 00:08:05.321 Set Features (09h): Supported 00:08:05.321 Get Features (0Ah): Supported 00:08:05.321 Asynchronous Event Request (0Ch): Supported 00:08:05.321 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:05.321 Directive Send (19h): Supported 00:08:05.321 Directive Receive (1Ah): Supported 00:08:05.321 Virtualization Management (1Ch): Supported 00:08:05.321 Doorbell Buffer Config (7Ch): Supported 00:08:05.321 Format NVM (80h): Supported LBA-Change 00:08:05.321 I/O Commands 00:08:05.321 ------------ 00:08:05.321 Flush (00h): Supported LBA-Change 00:08:05.322 Write (01h): Supported LBA-Change 00:08:05.322 Read (02h): Supported 00:08:05.322 Compare (05h): Supported 00:08:05.322 Write Zeroes (08h): Supported LBA-Change 00:08:05.322 Dataset Management (09h): Supported LBA-Change 00:08:05.322 Unknown (0Ch): Supported 00:08:05.322 Unknown (12h): Supported 00:08:05.322 Copy (19h): Supported LBA-Change 00:08:05.322 Unknown (1Dh): Supported LBA-Change 00:08:05.322 00:08:05.322 Error Log 00:08:05.322 ========= 00:08:05.322 00:08:05.322 Arbitration 00:08:05.322 =========== 00:08:05.322 Arbitration Burst: no limit 00:08:05.322 00:08:05.322 Power Management 00:08:05.322 ================ 00:08:05.322 Number of Power States: 1 00:08:05.322 Current Power State: Power State #0 00:08:05.322 Power State #0: 00:08:05.322 Max Power: 25.00 W 00:08:05.322 Non-Operational State: Operational 00:08:05.322 Entry Latency: 16 microseconds 00:08:05.322 Exit Latency: 4 microseconds 00:08:05.322 Relative Read Throughput: 0 00:08:05.322 Relative Read Latency: 0 00:08:05.322 Relative Write Throughput: 0 00:08:05.322 Relative Write Latency: 0 00:08:05.322 Idle Power: Not Reported 00:08:05.322 Active Power: Not Reported 00:08:05.322 Non-Operational Permissive Mode: Not Supported 00:08:05.322 00:08:05.322 Health Information 00:08:05.322 ================== 00:08:05.322 Critical Warnings: 00:08:05.322 
Available Spare Space: OK 00:08:05.322 Temperature: OK 00:08:05.322 Device Reliability: OK 00:08:05.322 Read Only: No 00:08:05.322 Volatile Memory Backup: OK 00:08:05.322 Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.322 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:05.322 Available Spare: 0% 00:08:05.322 Available Spare Threshold: 0% 00:08:05.322 Life Percentage Used: 0% 00:08:05.322 Data Units Read: 1065 00:08:05.322 Data Units Written: 994 00:08:05.322 Host Read Commands: 37377 00:08:05.322 Host Write Commands: 36800 00:08:05.322 Controller Busy Time: 0 minutes 00:08:05.322 Power Cycles: 0 00:08:05.322 Power On Hours: 0 hours 00:08:05.322 Unsafe Shutdowns: 0 00:08:05.322 Unrecoverable Media Errors: 0 00:08:05.322 Lifetime Error Log Entries: 0 00:08:05.322 Warning Temperature Time: 0 minutes 00:08:05.322 Critical Temperature Time: 0 minutes 00:08:05.322 00:08:05.322 Number of Queues 00:08:05.322 ================ 00:08:05.322 Number of I/O Submission Queues: 64 00:08:05.322 Number of I/O Completion Queues: 64 00:08:05.322 00:08:05.322 ZNS Specific Controller Data 00:08:05.322 ============================ 00:08:05.322 Zone Append Size Limit: 0 00:08:05.322 00:08:05.322 00:08:05.322 Active Namespaces 00:08:05.322 ================= 00:08:05.322 Namespace ID:1 00:08:05.322 Error Recovery Timeout: Unlimited 00:08:05.322 Command Set Identifier: NVM (00h) 00:08:05.322 Deallocate: Supported 00:08:05.322 Deallocated/Unwritten Error: Supported 00:08:05.322 Deallocated Read Value: All 0x00 00:08:05.322 Deallocate in Write Zeroes: Not Supported 00:08:05.322 Deallocated Guard Field: 0xFFFF 00:08:05.322 Flush: Supported 00:08:05.322 Reservation: Not Supported 00:08:05.322 Namespace Sharing Capabilities: Multiple Controllers 00:08:05.322 Size (in LBAs): 262144 (1GiB) 00:08:05.322 Capacity (in LBAs): 262144 (1GiB) 00:08:05.322 Utilization (in LBAs): 262144 (1GiB) 00:08:05.322 Thin Provisioning: Not Supported 00:08:05.322 Per-NS Atomic Units: No 00:08:05.322 Maximum Single Source Range Length: 128 00:08:05.322 Maximum Copy Length: 128 00:08:05.322 Maximum Source Range Count: 128 00:08:05.322 NGUID/EUI64 Never Reused: No 00:08:05.322 Namespace Write Protected: No 00:08:05.322 Endurance group ID: 1 00:08:05.322 Number of LBA Formats: 8 00:08:05.322 Current LBA Format: LBA Format #04 00:08:05.322 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.322 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.322 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.322 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.322 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.322 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.322 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.322 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.322 00:08:05.322 Get Feature FDP: 00:08:05.322 ================ 00:08:05.322 Enabled: Yes 00:08:05.322 FDP configuration index: 0 00:08:05.322 00:08:05.322 FDP configurations log page 00:08:05.322 =========================== 00:08:05.322 Number of FDP configurations: 1 00:08:05.322 Version: 0 00:08:05.322 Size: 112 00:08:05.322 FDP Configuration Descriptor: 0 00:08:05.322 Descriptor Size: 96 00:08:05.322 Reclaim Group Identifier format: 2 00:08:05.322 FDP Volatile Write Cache: Not Present 00:08:05.322 FDP Configuration: Valid 00:08:05.322 Vendor Specific Size: 0 00:08:05.322 Number of Reclaim Groups: 2 00:08:05.322 Number of Reclaim Unit Handles: 8 00:08:05.322 Max Placement Identifiers: 128 00:08:05.322
Number of Namespaces Supported: 256 00:08:05.322 Reclaim unit Nominal Size: 6000000 bytes 00:08:05.322 Estimated Reclaim Unit Time Limit: Not Reported 00:08:05.322 RUH Desc #000: RUH Type: Initially Isolated 00:08:05.322 RUH Desc #001: RUH Type: Initially Isolated 00:08:05.322 RUH Desc #002: RUH Type: Initially Isolated 00:08:05.322 RUH Desc #003: RUH Type: Initially Isolated 00:08:05.322 RUH Desc #004: RUH Type: Initially Isolated 00:08:05.322 RUH Desc #005: RUH Type: Initially Isolated 00:08:05.322 RUH Desc #006: RUH Type: Initially Isolated 00:08:05.322 RUH Desc #007: RUH Type: Initially Isolated 00:08:05.322 00:08:05.322 FDP reclaim unit handle usage log page 00:08:05.322 ====================================== 00:08:05.322 Number of Reclaim Unit Handles: 8 00:08:05.322 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:05.322 RUH Usage Desc #001: RUH Attributes: Unused 00:08:05.322 RUH Usage Desc #002: RUH Attributes: Unused 00:08:05.322 RUH Usage Desc #003: RUH Attributes: Unused 00:08:05.322 RUH Usage Desc #004: RUH Attributes: Unused 00:08:05.322 RUH Usage Desc #005: RUH Attributes: Unused 00:08:05.322 RUH Usage Desc #006: RUH Attributes: Unused 00:08:05.322 RUH Usage Desc #007: RUH Attributes: Unused 00:08:05.322 00:08:05.322 FDP statistics log page 00:08:05.322 ======================= 00:08:05.322 Host bytes with metadata written: 610902016 00:08:05.322 [2024-10-01 03:34:57.637226] nvme_ctrlr.c:3628:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0] process 63360 terminated unexpected 00:08:05.322 Media bytes with metadata written: 610983936 00:08:05.322 Media bytes erased: 0 00:08:05.322 00:08:05.322 FDP events log page 00:08:05.322 =================== 00:08:05.322 Number of FDP events: 0 00:08:05.322 00:08:05.322 NVM Specific Namespace Data 00:08:05.322 =========================== 00:08:05.322 Logical Block Storage Tag Mask: 0 00:08:05.322 Protection Information Capabilities: 00:08:05.322 16b Guard Protection Information Storage Tag Support: No 00:08:05.322 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.322 Storage Tag Check Read Support: No 00:08:05.322 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.322 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.322 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.322 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.323 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.323 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.323 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.323 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.323 ===================================================== 00:08:05.323 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:05.323 ===================================================== 00:08:05.323 Controller Capabilities/Features 00:08:05.323 ================================ 00:08:05.323 Vendor ID: 1b36 00:08:05.323 Subsystem Vendor ID: 1af4 00:08:05.323 Serial Number: 12342 00:08:05.323 Model Number: QEMU NVMe Ctrl 00:08:05.323 Firmware Version: 8.0.0 00:08:05.323 Recommended Arb Burst: 6 00:08:05.323 IEEE OUI Identifier: 00 54 52 00:08:05.323 Multi-path I/O 00:08:05.323
May have multiple subsystem ports: No 00:08:05.323 May have multiple controllers: No 00:08:05.323 Associated with SR-IOV VF: No 00:08:05.323 Max Data Transfer Size: 524288 00:08:05.323 Max Number of Namespaces: 256 00:08:05.323 Max Number of I/O Queues: 64 00:08:05.323 NVMe Specification Version (VS): 1.4 00:08:05.323 NVMe Specification Version (Identify): 1.4 00:08:05.323 Maximum Queue Entries: 2048 00:08:05.323 Contiguous Queues Required: Yes 00:08:05.323 Arbitration Mechanisms Supported 00:08:05.323 Weighted Round Robin: Not Supported 00:08:05.323 Vendor Specific: Not Supported 00:08:05.323 Reset Timeout: 7500 ms 00:08:05.323 Doorbell Stride: 4 bytes 00:08:05.323 NVM Subsystem Reset: Not Supported 00:08:05.323 Command Sets Supported 00:08:05.323 NVM Command Set: Supported 00:08:05.323 Boot Partition: Not Supported 00:08:05.323 Memory Page Size Minimum: 4096 bytes 00:08:05.323 Memory Page Size Maximum: 65536 bytes 00:08:05.323 Persistent Memory Region: Not Supported 00:08:05.323 Optional Asynchronous Events Supported 00:08:05.323 Namespace Attribute Notices: Supported 00:08:05.323 Firmware Activation Notices: Not Supported 00:08:05.323 ANA Change Notices: Not Supported 00:08:05.323 PLE Aggregate Log Change Notices: Not Supported 00:08:05.323 LBA Status Info Alert Notices: Not Supported 00:08:05.323 EGE Aggregate Log Change Notices: Not Supported 00:08:05.323 Normal NVM Subsystem Shutdown event: Not Supported 00:08:05.323 Zone Descriptor Change Notices: Not Supported 00:08:05.323 Discovery Log Change Notices: Not Supported 00:08:05.323 Controller Attributes 00:08:05.323 128-bit Host Identifier: Not Supported 00:08:05.323 Non-Operational Permissive Mode: Not Supported 00:08:05.323 NVM Sets: Not Supported 00:08:05.323 Read Recovery Levels: Not Supported 00:08:05.323 Endurance Groups: Not Supported 00:08:05.323 Predictable Latency Mode: Not Supported 00:08:05.323 Traffic Based Keep ALive: Not Supported 00:08:05.323 Namespace Granularity: Not Supported 00:08:05.323 SQ Associations: Not Supported 00:08:05.323 UUID List: Not Supported 00:08:05.323 Multi-Domain Subsystem: Not Supported 00:08:05.323 Fixed Capacity Management: Not Supported 00:08:05.323 Variable Capacity Management: Not Supported 00:08:05.323 Delete Endurance Group: Not Supported 00:08:05.323 Delete NVM Set: Not Supported 00:08:05.323 Extended LBA Formats Supported: Supported 00:08:05.323 Flexible Data Placement Supported: Not Supported 00:08:05.323 00:08:05.323 Controller Memory Buffer Support 00:08:05.323 ================================ 00:08:05.323 Supported: No 00:08:05.323 00:08:05.323 Persistent Memory Region Support 00:08:05.323 ================================ 00:08:05.323 Supported: No 00:08:05.323 00:08:05.323 Admin Command Set Attributes 00:08:05.323 ============================ 00:08:05.323 Security Send/Receive: Not Supported 00:08:05.323 Format NVM: Supported 00:08:05.323 Firmware Activate/Download: Not Supported 00:08:05.323 Namespace Management: Supported 00:08:05.323 Device Self-Test: Not Supported 00:08:05.323 Directives: Supported 00:08:05.323 NVMe-MI: Not Supported 00:08:05.323 Virtualization Management: Not Supported 00:08:05.323 Doorbell Buffer Config: Supported 00:08:05.323 Get LBA Status Capability: Not Supported 00:08:05.323 Command & Feature Lockdown Capability: Not Supported 00:08:05.323 Abort Command Limit: 4 00:08:05.323 Async Event Request Limit: 4 00:08:05.323 Number of Firmware Slots: N/A 00:08:05.323 Firmware Slot 1 Read-Only: N/A 00:08:05.323 Firmware Activation Without Reset: N/A 00:08:05.323 
Multiple Update Detection Support: N/A 00:08:05.323 Firmware Update Granularity: No Information Provided 00:08:05.323 Per-Namespace SMART Log: Yes 00:08:05.323 Asymmetric Namespace Access Log Page: Not Supported 00:08:05.323 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:05.323 Command Effects Log Page: Supported 00:08:05.323 Get Log Page Extended Data: Supported 00:08:05.323 Telemetry Log Pages: Not Supported 00:08:05.323 Persistent Event Log Pages: Not Supported 00:08:05.323 Supported Log Pages Log Page: May Support 00:08:05.323 Commands Supported & Effects Log Page: Not Supported 00:08:05.323 Feature Identifiers & Effects Log Page:May Support 00:08:05.323 NVMe-MI Commands & Effects Log Page: May Support 00:08:05.323 Data Area 4 for Telemetry Log: Not Supported 00:08:05.323 Error Log Page Entries Supported: 1 00:08:05.323 Keep Alive: Not Supported 00:08:05.323 00:08:05.323 NVM Command Set Attributes 00:08:05.323 ========================== 00:08:05.323 Submission Queue Entry Size 00:08:05.323 Max: 64 00:08:05.323 Min: 64 00:08:05.323 Completion Queue Entry Size 00:08:05.323 Max: 16 00:08:05.323 Min: 16 00:08:05.323 Number of Namespaces: 256 00:08:05.323 Compare Command: Supported 00:08:05.323 Write Uncorrectable Command: Not Supported 00:08:05.323 Dataset Management Command: Supported 00:08:05.323 Write Zeroes Command: Supported 00:08:05.323 Set Features Save Field: Supported 00:08:05.323 Reservations: Not Supported 00:08:05.323 Timestamp: Supported 00:08:05.323 Copy: Supported 00:08:05.323 Volatile Write Cache: Present 00:08:05.323 Atomic Write Unit (Normal): 1 00:08:05.323 Atomic Write Unit (PFail): 1 00:08:05.323 Atomic Compare & Write Unit: 1 00:08:05.323 Fused Compare & Write: Not Supported 00:08:05.323 Scatter-Gather List 00:08:05.323 SGL Command Set: Supported 00:08:05.323 SGL Keyed: Not Supported 00:08:05.323 SGL Bit Bucket Descriptor: Not Supported 00:08:05.323 SGL Metadata Pointer: Not Supported 00:08:05.323 Oversized SGL: Not Supported 00:08:05.323 SGL Metadata Address: Not Supported 00:08:05.323 SGL Offset: Not Supported 00:08:05.323 Transport SGL Data Block: Not Supported 00:08:05.323 Replay Protected Memory Block: Not Supported 00:08:05.323 00:08:05.323 Firmware Slot Information 00:08:05.323 ========================= 00:08:05.324 Active slot: 1 00:08:05.324 Slot 1 Firmware Revision: 1.0 00:08:05.324 00:08:05.324 00:08:05.324 Commands Supported and Effects 00:08:05.324 ============================== 00:08:05.324 Admin Commands 00:08:05.324 -------------- 00:08:05.324 Delete I/O Submission Queue (00h): Supported 00:08:05.324 Create I/O Submission Queue (01h): Supported 00:08:05.324 Get Log Page (02h): Supported 00:08:05.324 Delete I/O Completion Queue (04h): Supported 00:08:05.324 Create I/O Completion Queue (05h): Supported 00:08:05.324 Identify (06h): Supported 00:08:05.324 Abort (08h): Supported 00:08:05.324 Set Features (09h): Supported 00:08:05.324 Get Features (0Ah): Supported 00:08:05.324 Asynchronous Event Request (0Ch): Supported 00:08:05.324 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:05.324 Directive Send (19h): Supported 00:08:05.324 Directive Receive (1Ah): Supported 00:08:05.324 Virtualization Management (1Ch): Supported 00:08:05.324 Doorbell Buffer Config (7Ch): Supported 00:08:05.324 Format NVM (80h): Supported LBA-Change 00:08:05.324 I/O Commands 00:08:05.324 ------------ 00:08:05.324 Flush (00h): Supported LBA-Change 00:08:05.324 Write (01h): Supported LBA-Change 00:08:05.324 Read (02h): Supported 00:08:05.324 Compare (05h): Supported 
00:08:05.324 Write Zeroes (08h): Supported LBA-Change 00:08:05.324 Dataset Management (09h): Supported LBA-Change 00:08:05.324 Unknown (0Ch): Supported 00:08:05.324 Unknown (12h): Supported 00:08:05.324 Copy (19h): Supported LBA-Change 00:08:05.324 Unknown (1Dh): Supported LBA-Change 00:08:05.324 00:08:05.324 Error Log 00:08:05.324 ========= 00:08:05.324 00:08:05.324 Arbitration 00:08:05.324 =========== 00:08:05.324 Arbitration Burst: no limit 00:08:05.324 00:08:05.324 Power Management 00:08:05.324 ================ 00:08:05.324 Number of Power States: 1 00:08:05.324 Current Power State: Power State #0 00:08:05.324 Power State #0: 00:08:05.324 Max Power: 25.00 W 00:08:05.324 Non-Operational State: Operational 00:08:05.324 Entry Latency: 16 microseconds 00:08:05.324 Exit Latency: 4 microseconds 00:08:05.324 Relative Read Throughput: 0 00:08:05.324 Relative Read Latency: 0 00:08:05.324 Relative Write Throughput: 0 00:08:05.324 Relative Write Latency: 0 00:08:05.324 Idle Power: Not Reported 00:08:05.324 Active Power: Not Reported 00:08:05.324 Non-Operational Permissive Mode: Not Supported 00:08:05.324 00:08:05.324 Health Information 00:08:05.324 ================== 00:08:05.324 Critical Warnings: 00:08:05.324 Available Spare Space: OK 00:08:05.324 Temperature: OK 00:08:05.324 Device Reliability: OK 00:08:05.324 Read Only: No 00:08:05.324 Volatile Memory Backup: OK 00:08:05.324 Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.324 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:05.324 Available Spare: 0% 00:08:05.324 Available Spare Threshold: 0% 00:08:05.324 Life Percentage Used: 0% 00:08:05.324 Data Units Read: 2215 00:08:05.324 Data Units Written: 2002 00:08:05.324 Host Read Commands: 104185 00:08:05.324 Host Write Commands: 102454 00:08:05.324 Controller Busy Time: 0 minutes 00:08:05.324 Power Cycles: 0 00:08:05.324 Power On Hours: 0 hours 00:08:05.324 Unsafe Shutdowns: 0 00:08:05.324 Unrecoverable Media Errors: 0 00:08:05.324 Lifetime Error Log Entries: 0 00:08:05.324 Warning Temperature Time: 0 minutes 00:08:05.324 Critical Temperature Time: 0 minutes 00:08:05.324 00:08:05.324 Number of Queues 00:08:05.324 ================ 00:08:05.324 Number of I/O Submission Queues: 64 00:08:05.324 Number of I/O Completion Queues: 64 00:08:05.324 00:08:05.324 ZNS Specific Controller Data 00:08:05.324 ============================ 00:08:05.324 Zone Append Size Limit: 0 00:08:05.324 00:08:05.324 00:08:05.324 Active Namespaces 00:08:05.324 ================= 00:08:05.324 Namespace ID:1 00:08:05.324 Error Recovery Timeout: Unlimited 00:08:05.324 Command Set Identifier: NVM (00h) 00:08:05.324 Deallocate: Supported 00:08:05.324 Deallocated/Unwritten Error: Supported 00:08:05.324 Deallocated Read Value: All 0x00 00:08:05.324 Deallocate in Write Zeroes: Not Supported 00:08:05.324 Deallocated Guard Field: 0xFFFF 00:08:05.324 Flush: Supported 00:08:05.324 Reservation: Not Supported 00:08:05.324 Namespace Sharing Capabilities: Private 00:08:05.324 Size (in LBAs): 1048576 (4GiB) 00:08:05.324 Capacity (in LBAs): 1048576 (4GiB) 00:08:05.324 Utilization (in LBAs): 1048576 (4GiB) 00:08:05.324 Thin Provisioning: Not Supported 00:08:05.324 Per-NS Atomic Units: No 00:08:05.324 Maximum Single Source Range Length: 128 00:08:05.324 Maximum Copy Length: 128 00:08:05.324 Maximum Source Range Count: 128 00:08:05.324 NGUID/EUI64 Never Reused: No 00:08:05.324 Namespace Write Protected: No 00:08:05.324 Number of LBA Formats: 8 00:08:05.324 Current LBA Format: LBA Format #04 00:08:05.324 LBA Format #00: Data Size: 512 
Metadata Size: 0 00:08:05.324 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.324 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.324 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.324 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.324 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.324 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.324 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.324 00:08:05.324 NVM Specific Namespace Data 00:08:05.324 =========================== 00:08:05.324 Logical Block Storage Tag Mask: 0 00:08:05.324 Protection Information Capabilities: 00:08:05.324 16b Guard Protection Information Storage Tag Support: No 00:08:05.324 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.324 Storage Tag Check Read Support: No 00:08:05.324 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.324 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.324 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.324 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.324 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.324 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.324 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.324 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.324 Namespace ID:2 00:08:05.324 Error Recovery Timeout: Unlimited 00:08:05.324 Command Set Identifier: NVM (00h) 00:08:05.324 Deallocate: Supported 00:08:05.324 Deallocated/Unwritten Error: Supported 00:08:05.324 Deallocated Read Value: All 0x00 00:08:05.324 Deallocate in Write Zeroes: Not Supported 00:08:05.324 Deallocated Guard Field: 0xFFFF 00:08:05.324 Flush: Supported 00:08:05.324 Reservation: Not Supported 00:08:05.324 Namespace Sharing Capabilities: Private 00:08:05.324 Size (in LBAs): 1048576 (4GiB) 00:08:05.324 Capacity (in LBAs): 1048576 (4GiB) 00:08:05.325 Utilization (in LBAs): 1048576 (4GiB) 00:08:05.325 Thin Provisioning: Not Supported 00:08:05.325 Per-NS Atomic Units: No 00:08:05.325 Maximum Single Source Range Length: 128 00:08:05.325 Maximum Copy Length: 128 00:08:05.325 Maximum Source Range Count: 128 00:08:05.325 NGUID/EUI64 Never Reused: No 00:08:05.325 Namespace Write Protected: No 00:08:05.325 Number of LBA Formats: 8 00:08:05.325 Current LBA Format: LBA Format #04 00:08:05.325 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.325 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.325 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.325 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.325 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.325 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.325 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.325 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.325 00:08:05.325 NVM Specific Namespace Data 00:08:05.325 =========================== 00:08:05.325 Logical Block Storage Tag Mask: 0 00:08:05.325 Protection Information Capabilities: 00:08:05.325 16b Guard Protection Information Storage Tag Support: No 00:08:05.325 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
00:08:05.325 Storage Tag Check Read Support: No 00:08:05.325 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Namespace ID:3 00:08:05.325 Error Recovery Timeout: Unlimited 00:08:05.325 Command Set Identifier: NVM (00h) 00:08:05.325 Deallocate: Supported 00:08:05.325 Deallocated/Unwritten Error: Supported 00:08:05.325 Deallocated Read Value: All 0x00 00:08:05.325 Deallocate in Write Zeroes: Not Supported 00:08:05.325 Deallocated Guard Field: 0xFFFF 00:08:05.325 Flush: Supported 00:08:05.325 Reservation: Not Supported 00:08:05.325 Namespace Sharing Capabilities: Private 00:08:05.325 Size (in LBAs): 1048576 (4GiB) 00:08:05.325 Capacity (in LBAs): 1048576 (4GiB) 00:08:05.325 Utilization (in LBAs): 1048576 (4GiB) 00:08:05.325 Thin Provisioning: Not Supported 00:08:05.325 Per-NS Atomic Units: No 00:08:05.325 Maximum Single Source Range Length: 128 00:08:05.325 Maximum Copy Length: 128 00:08:05.325 Maximum Source Range Count: 128 00:08:05.325 NGUID/EUI64 Never Reused: No 00:08:05.325 Namespace Write Protected: No 00:08:05.325 Number of LBA Formats: 8 00:08:05.325 Current LBA Format: LBA Format #04 00:08:05.325 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.325 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.325 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.325 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.325 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.325 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.325 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.325 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.325 00:08:05.325 NVM Specific Namespace Data 00:08:05.325 =========================== 00:08:05.325 Logical Block Storage Tag Mask: 0 00:08:05.325 Protection Information Capabilities: 00:08:05.325 16b Guard Protection Information Storage Tag Support: No 00:08:05.325 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.325 Storage Tag Check Read Support: No 00:08:05.325 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.325 03:34:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:05.325 03:34:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:08:05.583 ===================================================== 00:08:05.583 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:05.583 ===================================================== 00:08:05.583 Controller Capabilities/Features 00:08:05.583 ================================ 00:08:05.583 Vendor ID: 1b36 00:08:05.583 Subsystem Vendor ID: 1af4 00:08:05.583 Serial Number: 12340 00:08:05.583 Model Number: QEMU NVMe Ctrl 00:08:05.583 Firmware Version: 8.0.0 00:08:05.583 Recommended Arb Burst: 6 00:08:05.583 IEEE OUI Identifier: 00 54 52 00:08:05.583 Multi-path I/O 00:08:05.583 May have multiple subsystem ports: No 00:08:05.583 May have multiple controllers: No 00:08:05.583 Associated with SR-IOV VF: No 00:08:05.583 Max Data Transfer Size: 524288 00:08:05.583 Max Number of Namespaces: 256 00:08:05.583 Max Number of I/O Queues: 64 00:08:05.583 NVMe Specification Version (VS): 1.4 00:08:05.583 NVMe Specification Version (Identify): 1.4 00:08:05.583 Maximum Queue Entries: 2048 00:08:05.583 Contiguous Queues Required: Yes 00:08:05.583 Arbitration Mechanisms Supported 00:08:05.583 Weighted Round Robin: Not Supported 00:08:05.583 Vendor Specific: Not Supported 00:08:05.583 Reset Timeout: 7500 ms 00:08:05.583 Doorbell Stride: 4 bytes 00:08:05.583 NVM Subsystem Reset: Not Supported 00:08:05.583 Command Sets Supported 00:08:05.583 NVM Command Set: Supported 00:08:05.583 Boot Partition: Not Supported 00:08:05.583 Memory Page Size Minimum: 4096 bytes 00:08:05.583 Memory Page Size Maximum: 65536 bytes 00:08:05.583 Persistent Memory Region: Not Supported 00:08:05.583 Optional Asynchronous Events Supported 00:08:05.583 Namespace Attribute Notices: Supported 00:08:05.583 Firmware Activation Notices: Not Supported 00:08:05.583 ANA Change Notices: Not Supported 00:08:05.583 PLE Aggregate Log Change Notices: Not Supported 00:08:05.583 LBA Status Info Alert Notices: Not Supported 00:08:05.583 EGE Aggregate Log Change Notices: Not Supported 00:08:05.583 Normal NVM Subsystem Shutdown event: Not Supported 00:08:05.583 Zone Descriptor Change Notices: Not Supported 00:08:05.583 Discovery Log Change Notices: Not Supported 00:08:05.583 Controller Attributes 00:08:05.583 128-bit Host Identifier: Not Supported 00:08:05.583 Non-Operational Permissive Mode: Not Supported 00:08:05.583 NVM Sets: Not Supported 00:08:05.583 Read Recovery Levels: Not Supported 00:08:05.583 Endurance Groups: Not Supported 00:08:05.583 Predictable Latency Mode: Not Supported 00:08:05.584 Traffic Based Keep ALive: Not Supported 00:08:05.584 Namespace Granularity: Not Supported 00:08:05.584 SQ Associations: Not Supported 00:08:05.584 UUID List: Not Supported 00:08:05.584 Multi-Domain Subsystem: Not Supported 00:08:05.584 Fixed Capacity Management: Not Supported 00:08:05.584 Variable Capacity Management: Not Supported 00:08:05.584 Delete Endurance Group: Not Supported 00:08:05.584 Delete NVM Set: Not Supported 00:08:05.584 Extended LBA Formats Supported: Supported 00:08:05.584 Flexible Data Placement Supported: Not Supported 00:08:05.584 00:08:05.584 Controller Memory Buffer Support 00:08:05.584 ================================ 00:08:05.584 Supported: No 00:08:05.584 00:08:05.584 Persistent Memory Region Support 00:08:05.584 
================================ 00:08:05.584 Supported: No 00:08:05.584 00:08:05.584 Admin Command Set Attributes 00:08:05.584 ============================ 00:08:05.584 Security Send/Receive: Not Supported 00:08:05.584 Format NVM: Supported 00:08:05.584 Firmware Activate/Download: Not Supported 00:08:05.584 Namespace Management: Supported 00:08:05.584 Device Self-Test: Not Supported 00:08:05.584 Directives: Supported 00:08:05.584 NVMe-MI: Not Supported 00:08:05.584 Virtualization Management: Not Supported 00:08:05.584 Doorbell Buffer Config: Supported 00:08:05.584 Get LBA Status Capability: Not Supported 00:08:05.584 Command & Feature Lockdown Capability: Not Supported 00:08:05.584 Abort Command Limit: 4 00:08:05.584 Async Event Request Limit: 4 00:08:05.584 Number of Firmware Slots: N/A 00:08:05.584 Firmware Slot 1 Read-Only: N/A 00:08:05.584 Firmware Activation Without Reset: N/A 00:08:05.584 Multiple Update Detection Support: N/A 00:08:05.584 Firmware Update Granularity: No Information Provided 00:08:05.584 Per-Namespace SMART Log: Yes 00:08:05.584 Asymmetric Namespace Access Log Page: Not Supported 00:08:05.584 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:08:05.584 Command Effects Log Page: Supported 00:08:05.584 Get Log Page Extended Data: Supported 00:08:05.584 Telemetry Log Pages: Not Supported 00:08:05.584 Persistent Event Log Pages: Not Supported 00:08:05.584 Supported Log Pages Log Page: May Support 00:08:05.584 Commands Supported & Effects Log Page: Not Supported 00:08:05.584 Feature Identifiers & Effects Log Page:May Support 00:08:05.584 NVMe-MI Commands & Effects Log Page: May Support 00:08:05.584 Data Area 4 for Telemetry Log: Not Supported 00:08:05.584 Error Log Page Entries Supported: 1 00:08:05.584 Keep Alive: Not Supported 00:08:05.584 00:08:05.584 NVM Command Set Attributes 00:08:05.584 ========================== 00:08:05.584 Submission Queue Entry Size 00:08:05.584 Max: 64 00:08:05.584 Min: 64 00:08:05.584 Completion Queue Entry Size 00:08:05.584 Max: 16 00:08:05.584 Min: 16 00:08:05.584 Number of Namespaces: 256 00:08:05.584 Compare Command: Supported 00:08:05.584 Write Uncorrectable Command: Not Supported 00:08:05.584 Dataset Management Command: Supported 00:08:05.584 Write Zeroes Command: Supported 00:08:05.584 Set Features Save Field: Supported 00:08:05.584 Reservations: Not Supported 00:08:05.584 Timestamp: Supported 00:08:05.584 Copy: Supported 00:08:05.584 Volatile Write Cache: Present 00:08:05.584 Atomic Write Unit (Normal): 1 00:08:05.584 Atomic Write Unit (PFail): 1 00:08:05.584 Atomic Compare & Write Unit: 1 00:08:05.584 Fused Compare & Write: Not Supported 00:08:05.584 Scatter-Gather List 00:08:05.584 SGL Command Set: Supported 00:08:05.584 SGL Keyed: Not Supported 00:08:05.584 SGL Bit Bucket Descriptor: Not Supported 00:08:05.584 SGL Metadata Pointer: Not Supported 00:08:05.584 Oversized SGL: Not Supported 00:08:05.584 SGL Metadata Address: Not Supported 00:08:05.584 SGL Offset: Not Supported 00:08:05.584 Transport SGL Data Block: Not Supported 00:08:05.584 Replay Protected Memory Block: Not Supported 00:08:05.584 00:08:05.584 Firmware Slot Information 00:08:05.584 ========================= 00:08:05.584 Active slot: 1 00:08:05.584 Slot 1 Firmware Revision: 1.0 00:08:05.584 00:08:05.584 00:08:05.584 Commands Supported and Effects 00:08:05.584 ============================== 00:08:05.584 Admin Commands 00:08:05.584 -------------- 00:08:05.584 Delete I/O Submission Queue (00h): Supported 00:08:05.584 Create I/O Submission Queue (01h): Supported 00:08:05.584 
Get Log Page (02h): Supported 00:08:05.584 Delete I/O Completion Queue (04h): Supported 00:08:05.584 Create I/O Completion Queue (05h): Supported 00:08:05.584 Identify (06h): Supported 00:08:05.584 Abort (08h): Supported 00:08:05.584 Set Features (09h): Supported 00:08:05.584 Get Features (0Ah): Supported 00:08:05.584 Asynchronous Event Request (0Ch): Supported 00:08:05.584 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:05.584 Directive Send (19h): Supported 00:08:05.584 Directive Receive (1Ah): Supported 00:08:05.584 Virtualization Management (1Ch): Supported 00:08:05.584 Doorbell Buffer Config (7Ch): Supported 00:08:05.584 Format NVM (80h): Supported LBA-Change 00:08:05.584 I/O Commands 00:08:05.584 ------------ 00:08:05.584 Flush (00h): Supported LBA-Change 00:08:05.584 Write (01h): Supported LBA-Change 00:08:05.584 Read (02h): Supported 00:08:05.584 Compare (05h): Supported 00:08:05.584 Write Zeroes (08h): Supported LBA-Change 00:08:05.584 Dataset Management (09h): Supported LBA-Change 00:08:05.584 Unknown (0Ch): Supported 00:08:05.584 Unknown (12h): Supported 00:08:05.584 Copy (19h): Supported LBA-Change 00:08:05.584 Unknown (1Dh): Supported LBA-Change 00:08:05.584 00:08:05.584 Error Log 00:08:05.584 ========= 00:08:05.584 00:08:05.584 Arbitration 00:08:05.584 =========== 00:08:05.584 Arbitration Burst: no limit 00:08:05.584 00:08:05.584 Power Management 00:08:05.584 ================ 00:08:05.584 Number of Power States: 1 00:08:05.584 Current Power State: Power State #0 00:08:05.584 Power State #0: 00:08:05.584 Max Power: 25.00 W 00:08:05.584 Non-Operational State: Operational 00:08:05.584 Entry Latency: 16 microseconds 00:08:05.584 Exit Latency: 4 microseconds 00:08:05.584 Relative Read Throughput: 0 00:08:05.584 Relative Read Latency: 0 00:08:05.584 Relative Write Throughput: 0 00:08:05.584 Relative Write Latency: 0 00:08:05.584 Idle Power: Not Reported 00:08:05.584 Active Power: Not Reported 00:08:05.584 Non-Operational Permissive Mode: Not Supported 00:08:05.584 00:08:05.584 Health Information 00:08:05.584 ================== 00:08:05.584 Critical Warnings: 00:08:05.584 Available Spare Space: OK 00:08:05.584 Temperature: OK 00:08:05.584 Device Reliability: OK 00:08:05.584 Read Only: No 00:08:05.584 Volatile Memory Backup: OK 00:08:05.584 Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.584 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:05.584 Available Spare: 0% 00:08:05.584 Available Spare Threshold: 0% 00:08:05.584 Life Percentage Used: 0% 00:08:05.584 Data Units Read: 642 00:08:05.584 Data Units Written: 570 00:08:05.584 Host Read Commands: 33535 00:08:05.584 Host Write Commands: 33321 00:08:05.584 Controller Busy Time: 0 minutes 00:08:05.584 Power Cycles: 0 00:08:05.584 Power On Hours: 0 hours 00:08:05.584 Unsafe Shutdowns: 0 00:08:05.584 Unrecoverable Media Errors: 0 00:08:05.584 Lifetime Error Log Entries: 0 00:08:05.584 Warning Temperature Time: 0 minutes 00:08:05.584 Critical Temperature Time: 0 minutes 00:08:05.584 00:08:05.584 Number of Queues 00:08:05.584 ================ 00:08:05.584 Number of I/O Submission Queues: 64 00:08:05.584 Number of I/O Completion Queues: 64 00:08:05.584 00:08:05.584 ZNS Specific Controller Data 00:08:05.584 ============================ 00:08:05.584 Zone Append Size Limit: 0 00:08:05.584 00:08:05.584 00:08:05.584 Active Namespaces 00:08:05.584 ================= 00:08:05.584 Namespace ID:1 00:08:05.584 Error Recovery Timeout: Unlimited 00:08:05.584 Command Set Identifier: NVM (00h) 00:08:05.584 Deallocate: Supported 
00:08:05.584 Deallocated/Unwritten Error: Supported 00:08:05.584 Deallocated Read Value: All 0x00 00:08:05.584 Deallocate in Write Zeroes: Not Supported 00:08:05.584 Deallocated Guard Field: 0xFFFF 00:08:05.584 Flush: Supported 00:08:05.584 Reservation: Not Supported 00:08:05.584 Metadata Transferred as: Separate Metadata Buffer 00:08:05.584 Namespace Sharing Capabilities: Private 00:08:05.584 Size (in LBAs): 1548666 (5GiB) 00:08:05.584 Capacity (in LBAs): 1548666 (5GiB) 00:08:05.584 Utilization (in LBAs): 1548666 (5GiB) 00:08:05.584 Thin Provisioning: Not Supported 00:08:05.584 Per-NS Atomic Units: No 00:08:05.584 Maximum Single Source Range Length: 128 00:08:05.584 Maximum Copy Length: 128 00:08:05.584 Maximum Source Range Count: 128 00:08:05.584 NGUID/EUI64 Never Reused: No 00:08:05.584 Namespace Write Protected: No 00:08:05.584 Number of LBA Formats: 8 00:08:05.584 Current LBA Format: LBA Format #07 00:08:05.584 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.584 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.584 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.584 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.584 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.584 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.584 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.584 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.584 00:08:05.584 NVM Specific Namespace Data 00:08:05.584 =========================== 00:08:05.584 Logical Block Storage Tag Mask: 0 00:08:05.584 Protection Information Capabilities: 00:08:05.584 16b Guard Protection Information Storage Tag Support: No 00:08:05.584 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.584 Storage Tag Check Read Support: No 00:08:05.584 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.584 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.584 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.584 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.584 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.584 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.584 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.584 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.584 03:34:57 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:05.584 03:34:57 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:08:05.584 ===================================================== 00:08:05.584 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:05.584 ===================================================== 00:08:05.584 Controller Capabilities/Features 00:08:05.584 ================================ 00:08:05.584 Vendor ID: 1b36 00:08:05.584 Subsystem Vendor ID: 1af4 00:08:05.584 Serial Number: 12341 00:08:05.584 Model Number: QEMU NVMe Ctrl 00:08:05.584 Firmware Version: 8.0.0 00:08:05.584 Recommended Arb Burst: 6 00:08:05.584 IEEE OUI Identifier: 00 54 52 00:08:05.584 Multi-path I/O 00:08:05.584 May have multiple subsystem ports: No 00:08:05.584 May have multiple 
controllers: No 00:08:05.584 Associated with SR-IOV VF: No 00:08:05.584 Max Data Transfer Size: 524288 00:08:05.584 Max Number of Namespaces: 256 00:08:05.584 Max Number of I/O Queues: 64 00:08:05.584 NVMe Specification Version (VS): 1.4 00:08:05.584 NVMe Specification Version (Identify): 1.4 00:08:05.584 Maximum Queue Entries: 2048 00:08:05.584 Contiguous Queues Required: Yes 00:08:05.584 Arbitration Mechanisms Supported 00:08:05.584 Weighted Round Robin: Not Supported 00:08:05.584 Vendor Specific: Not Supported 00:08:05.584 Reset Timeout: 7500 ms 00:08:05.584 Doorbell Stride: 4 bytes 00:08:05.584 NVM Subsystem Reset: Not Supported 00:08:05.584 Command Sets Supported 00:08:05.584 NVM Command Set: Supported 00:08:05.584 Boot Partition: Not Supported 00:08:05.584 Memory Page Size Minimum: 4096 bytes 00:08:05.584 Memory Page Size Maximum: 65536 bytes 00:08:05.584 Persistent Memory Region: Not Supported 00:08:05.584 Optional Asynchronous Events Supported 00:08:05.584 Namespace Attribute Notices: Supported 00:08:05.584 Firmware Activation Notices: Not Supported 00:08:05.584 ANA Change Notices: Not Supported 00:08:05.584 PLE Aggregate Log Change Notices: Not Supported 00:08:05.584 LBA Status Info Alert Notices: Not Supported 00:08:05.584 EGE Aggregate Log Change Notices: Not Supported 00:08:05.584 Normal NVM Subsystem Shutdown event: Not Supported 00:08:05.584 Zone Descriptor Change Notices: Not Supported 00:08:05.584 Discovery Log Change Notices: Not Supported 00:08:05.584 Controller Attributes 00:08:05.584 128-bit Host Identifier: Not Supported 00:08:05.584 Non-Operational Permissive Mode: Not Supported 00:08:05.584 NVM Sets: Not Supported 00:08:05.584 Read Recovery Levels: Not Supported 00:08:05.584 Endurance Groups: Not Supported 00:08:05.584 Predictable Latency Mode: Not Supported 00:08:05.584 Traffic Based Keep ALive: Not Supported 00:08:05.584 Namespace Granularity: Not Supported 00:08:05.584 SQ Associations: Not Supported 00:08:05.584 UUID List: Not Supported 00:08:05.584 Multi-Domain Subsystem: Not Supported 00:08:05.584 Fixed Capacity Management: Not Supported 00:08:05.584 Variable Capacity Management: Not Supported 00:08:05.584 Delete Endurance Group: Not Supported 00:08:05.584 Delete NVM Set: Not Supported 00:08:05.584 Extended LBA Formats Supported: Supported 00:08:05.584 Flexible Data Placement Supported: Not Supported 00:08:05.584 00:08:05.584 Controller Memory Buffer Support 00:08:05.584 ================================ 00:08:05.584 Supported: No 00:08:05.584 00:08:05.584 Persistent Memory Region Support 00:08:05.584 ================================ 00:08:05.584 Supported: No 00:08:05.584 00:08:05.584 Admin Command Set Attributes 00:08:05.584 ============================ 00:08:05.584 Security Send/Receive: Not Supported 00:08:05.584 Format NVM: Supported 00:08:05.584 Firmware Activate/Download: Not Supported 00:08:05.584 Namespace Management: Supported 00:08:05.584 Device Self-Test: Not Supported 00:08:05.584 Directives: Supported 00:08:05.584 NVMe-MI: Not Supported 00:08:05.584 Virtualization Management: Not Supported 00:08:05.584 Doorbell Buffer Config: Supported 00:08:05.584 Get LBA Status Capability: Not Supported 00:08:05.584 Command & Feature Lockdown Capability: Not Supported 00:08:05.584 Abort Command Limit: 4 00:08:05.584 Async Event Request Limit: 4 00:08:05.584 Number of Firmware Slots: N/A 00:08:05.584 Firmware Slot 1 Read-Only: N/A 00:08:05.584 Firmware Activation Without Reset: N/A 00:08:05.584 Multiple Update Detection Support: N/A 00:08:05.584 Firmware Update 
Granularity: No Information Provided 00:08:05.584 Per-Namespace SMART Log: Yes 00:08:05.584 Asymmetric Namespace Access Log Page: Not Supported 00:08:05.584 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:08:05.584 Command Effects Log Page: Supported 00:08:05.584 Get Log Page Extended Data: Supported 00:08:05.584 Telemetry Log Pages: Not Supported 00:08:05.584 Persistent Event Log Pages: Not Supported 00:08:05.584 Supported Log Pages Log Page: May Support 00:08:05.584 Commands Supported & Effects Log Page: Not Supported 00:08:05.584 Feature Identifiers & Effects Log Page:May Support 00:08:05.584 NVMe-MI Commands & Effects Log Page: May Support 00:08:05.584 Data Area 4 for Telemetry Log: Not Supported 00:08:05.584 Error Log Page Entries Supported: 1 00:08:05.584 Keep Alive: Not Supported 00:08:05.584 00:08:05.584 NVM Command Set Attributes 00:08:05.584 ========================== 00:08:05.584 Submission Queue Entry Size 00:08:05.584 Max: 64 00:08:05.584 Min: 64 00:08:05.584 Completion Queue Entry Size 00:08:05.584 Max: 16 00:08:05.584 Min: 16 00:08:05.584 Number of Namespaces: 256 00:08:05.584 Compare Command: Supported 00:08:05.584 Write Uncorrectable Command: Not Supported 00:08:05.584 Dataset Management Command: Supported 00:08:05.584 Write Zeroes Command: Supported 00:08:05.584 Set Features Save Field: Supported 00:08:05.584 Reservations: Not Supported 00:08:05.584 Timestamp: Supported 00:08:05.584 Copy: Supported 00:08:05.584 Volatile Write Cache: Present 00:08:05.585 Atomic Write Unit (Normal): 1 00:08:05.585 Atomic Write Unit (PFail): 1 00:08:05.585 Atomic Compare & Write Unit: 1 00:08:05.585 Fused Compare & Write: Not Supported 00:08:05.585 Scatter-Gather List 00:08:05.585 SGL Command Set: Supported 00:08:05.585 SGL Keyed: Not Supported 00:08:05.585 SGL Bit Bucket Descriptor: Not Supported 00:08:05.585 SGL Metadata Pointer: Not Supported 00:08:05.585 Oversized SGL: Not Supported 00:08:05.585 SGL Metadata Address: Not Supported 00:08:05.585 SGL Offset: Not Supported 00:08:05.585 Transport SGL Data Block: Not Supported 00:08:05.585 Replay Protected Memory Block: Not Supported 00:08:05.585 00:08:05.585 Firmware Slot Information 00:08:05.585 ========================= 00:08:05.585 Active slot: 1 00:08:05.585 Slot 1 Firmware Revision: 1.0 00:08:05.585 00:08:05.585 00:08:05.585 Commands Supported and Effects 00:08:05.585 ============================== 00:08:05.585 Admin Commands 00:08:05.585 -------------- 00:08:05.585 Delete I/O Submission Queue (00h): Supported 00:08:05.585 Create I/O Submission Queue (01h): Supported 00:08:05.585 Get Log Page (02h): Supported 00:08:05.585 Delete I/O Completion Queue (04h): Supported 00:08:05.585 Create I/O Completion Queue (05h): Supported 00:08:05.585 Identify (06h): Supported 00:08:05.585 Abort (08h): Supported 00:08:05.585 Set Features (09h): Supported 00:08:05.585 Get Features (0Ah): Supported 00:08:05.585 Asynchronous Event Request (0Ch): Supported 00:08:05.585 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:05.585 Directive Send (19h): Supported 00:08:05.585 Directive Receive (1Ah): Supported 00:08:05.585 Virtualization Management (1Ch): Supported 00:08:05.585 Doorbell Buffer Config (7Ch): Supported 00:08:05.585 Format NVM (80h): Supported LBA-Change 00:08:05.585 I/O Commands 00:08:05.585 ------------ 00:08:05.585 Flush (00h): Supported LBA-Change 00:08:05.585 Write (01h): Supported LBA-Change 00:08:05.585 Read (02h): Supported 00:08:05.585 Compare (05h): Supported 00:08:05.585 Write Zeroes (08h): Supported LBA-Change 00:08:05.585 
Dataset Management (09h): Supported LBA-Change 00:08:05.585 Unknown (0Ch): Supported 00:08:05.585 Unknown (12h): Supported 00:08:05.585 Copy (19h): Supported LBA-Change 00:08:05.585 Unknown (1Dh): Supported LBA-Change 00:08:05.585 00:08:05.585 Error Log 00:08:05.585 ========= 00:08:05.585 00:08:05.585 Arbitration 00:08:05.585 =========== 00:08:05.585 Arbitration Burst: no limit 00:08:05.585 00:08:05.585 Power Management 00:08:05.585 ================ 00:08:05.585 Number of Power States: 1 00:08:05.585 Current Power State: Power State #0 00:08:05.585 Power State #0: 00:08:05.585 Max Power: 25.00 W 00:08:05.585 Non-Operational State: Operational 00:08:05.585 Entry Latency: 16 microseconds 00:08:05.585 Exit Latency: 4 microseconds 00:08:05.585 Relative Read Throughput: 0 00:08:05.585 Relative Read Latency: 0 00:08:05.585 Relative Write Throughput: 0 00:08:05.585 Relative Write Latency: 0 00:08:05.842 Idle Power: Not Reported 00:08:05.842 Active Power: Not Reported 00:08:05.842 Non-Operational Permissive Mode: Not Supported 00:08:05.842 00:08:05.842 Health Information 00:08:05.842 ================== 00:08:05.842 Critical Warnings: 00:08:05.842 Available Spare Space: OK 00:08:05.842 Temperature: OK 00:08:05.842 Device Reliability: OK 00:08:05.842 Read Only: No 00:08:05.842 Volatile Memory Backup: OK 00:08:05.842 Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.842 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:05.842 Available Spare: 0% 00:08:05.842 Available Spare Threshold: 0% 00:08:05.842 Life Percentage Used: 0% 00:08:05.842 Data Units Read: 971 00:08:05.842 Data Units Written: 837 00:08:05.842 Host Read Commands: 50328 00:08:05.842 Host Write Commands: 49092 00:08:05.842 Controller Busy Time: 0 minutes 00:08:05.842 Power Cycles: 0 00:08:05.842 Power On Hours: 0 hours 00:08:05.842 Unsafe Shutdowns: 0 00:08:05.842 Unrecoverable Media Errors: 0 00:08:05.842 Lifetime Error Log Entries: 0 00:08:05.842 Warning Temperature Time: 0 minutes 00:08:05.842 Critical Temperature Time: 0 minutes 00:08:05.842 00:08:05.842 Number of Queues 00:08:05.842 ================ 00:08:05.842 Number of I/O Submission Queues: 64 00:08:05.842 Number of I/O Completion Queues: 64 00:08:05.842 00:08:05.842 ZNS Specific Controller Data 00:08:05.842 ============================ 00:08:05.842 Zone Append Size Limit: 0 00:08:05.842 00:08:05.842 00:08:05.842 Active Namespaces 00:08:05.842 ================= 00:08:05.843 Namespace ID:1 00:08:05.843 Error Recovery Timeout: Unlimited 00:08:05.843 Command Set Identifier: NVM (00h) 00:08:05.843 Deallocate: Supported 00:08:05.843 Deallocated/Unwritten Error: Supported 00:08:05.843 Deallocated Read Value: All 0x00 00:08:05.843 Deallocate in Write Zeroes: Not Supported 00:08:05.843 Deallocated Guard Field: 0xFFFF 00:08:05.843 Flush: Supported 00:08:05.843 Reservation: Not Supported 00:08:05.843 Namespace Sharing Capabilities: Private 00:08:05.843 Size (in LBAs): 1310720 (5GiB) 00:08:05.843 Capacity (in LBAs): 1310720 (5GiB) 00:08:05.843 Utilization (in LBAs): 1310720 (5GiB) 00:08:05.843 Thin Provisioning: Not Supported 00:08:05.843 Per-NS Atomic Units: No 00:08:05.843 Maximum Single Source Range Length: 128 00:08:05.843 Maximum Copy Length: 128 00:08:05.843 Maximum Source Range Count: 128 00:08:05.843 NGUID/EUI64 Never Reused: No 00:08:05.843 Namespace Write Protected: No 00:08:05.843 Number of LBA Formats: 8 00:08:05.843 Current LBA Format: LBA Format #04 00:08:05.843 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.843 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:08:05.843 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.843 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.843 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.843 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.843 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.843 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.843 00:08:05.843 NVM Specific Namespace Data 00:08:05.843 =========================== 00:08:05.843 Logical Block Storage Tag Mask: 0 00:08:05.843 Protection Information Capabilities: 00:08:05.843 16b Guard Protection Information Storage Tag Support: No 00:08:05.843 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.843 Storage Tag Check Read Support: No 00:08:05.843 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.843 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.843 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.843 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.843 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.843 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.843 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.843 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.843 03:34:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:05.843 03:34:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:08:05.843 ===================================================== 00:08:05.843 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:05.843 ===================================================== 00:08:05.843 Controller Capabilities/Features 00:08:05.843 ================================ 00:08:05.843 Vendor ID: 1b36 00:08:05.843 Subsystem Vendor ID: 1af4 00:08:05.843 Serial Number: 12342 00:08:05.843 Model Number: QEMU NVMe Ctrl 00:08:05.843 Firmware Version: 8.0.0 00:08:05.843 Recommended Arb Burst: 6 00:08:05.843 IEEE OUI Identifier: 00 54 52 00:08:05.843 Multi-path I/O 00:08:05.843 May have multiple subsystem ports: No 00:08:05.843 May have multiple controllers: No 00:08:05.843 Associated with SR-IOV VF: No 00:08:05.843 Max Data Transfer Size: 524288 00:08:05.843 Max Number of Namespaces: 256 00:08:05.843 Max Number of I/O Queues: 64 00:08:05.843 NVMe Specification Version (VS): 1.4 00:08:05.843 NVMe Specification Version (Identify): 1.4 00:08:05.843 Maximum Queue Entries: 2048 00:08:05.843 Contiguous Queues Required: Yes 00:08:05.843 Arbitration Mechanisms Supported 00:08:05.843 Weighted Round Robin: Not Supported 00:08:05.843 Vendor Specific: Not Supported 00:08:05.843 Reset Timeout: 7500 ms 00:08:05.843 Doorbell Stride: 4 bytes 00:08:05.843 NVM Subsystem Reset: Not Supported 00:08:05.843 Command Sets Supported 00:08:05.843 NVM Command Set: Supported 00:08:05.843 Boot Partition: Not Supported 00:08:05.843 Memory Page Size Minimum: 4096 bytes 00:08:05.843 Memory Page Size Maximum: 65536 bytes 00:08:05.843 Persistent Memory Region: Not Supported 00:08:05.843 Optional Asynchronous Events Supported 00:08:05.843 Namespace Attribute Notices: Supported 00:08:05.843 Firmware 
Activation Notices: Not Supported 00:08:05.843 ANA Change Notices: Not Supported 00:08:05.843 PLE Aggregate Log Change Notices: Not Supported 00:08:05.843 LBA Status Info Alert Notices: Not Supported 00:08:05.843 EGE Aggregate Log Change Notices: Not Supported 00:08:05.843 Normal NVM Subsystem Shutdown event: Not Supported 00:08:05.843 Zone Descriptor Change Notices: Not Supported 00:08:05.843 Discovery Log Change Notices: Not Supported 00:08:05.843 Controller Attributes 00:08:05.843 128-bit Host Identifier: Not Supported 00:08:05.843 Non-Operational Permissive Mode: Not Supported 00:08:05.843 NVM Sets: Not Supported 00:08:05.843 Read Recovery Levels: Not Supported 00:08:05.843 Endurance Groups: Not Supported 00:08:05.843 Predictable Latency Mode: Not Supported 00:08:05.843 Traffic Based Keep ALive: Not Supported 00:08:05.843 Namespace Granularity: Not Supported 00:08:05.843 SQ Associations: Not Supported 00:08:05.843 UUID List: Not Supported 00:08:05.843 Multi-Domain Subsystem: Not Supported 00:08:05.843 Fixed Capacity Management: Not Supported 00:08:05.843 Variable Capacity Management: Not Supported 00:08:05.843 Delete Endurance Group: Not Supported 00:08:05.843 Delete NVM Set: Not Supported 00:08:05.843 Extended LBA Formats Supported: Supported 00:08:05.843 Flexible Data Placement Supported: Not Supported 00:08:05.843 00:08:05.843 Controller Memory Buffer Support 00:08:05.843 ================================ 00:08:05.843 Supported: No 00:08:05.843 00:08:05.843 Persistent Memory Region Support 00:08:05.843 ================================ 00:08:05.843 Supported: No 00:08:05.843 00:08:05.843 Admin Command Set Attributes 00:08:05.843 ============================ 00:08:05.843 Security Send/Receive: Not Supported 00:08:05.843 Format NVM: Supported 00:08:05.843 Firmware Activate/Download: Not Supported 00:08:05.843 Namespace Management: Supported 00:08:05.843 Device Self-Test: Not Supported 00:08:05.843 Directives: Supported 00:08:05.843 NVMe-MI: Not Supported 00:08:05.843 Virtualization Management: Not Supported 00:08:05.843 Doorbell Buffer Config: Supported 00:08:05.843 Get LBA Status Capability: Not Supported 00:08:05.843 Command & Feature Lockdown Capability: Not Supported 00:08:05.843 Abort Command Limit: 4 00:08:05.843 Async Event Request Limit: 4 00:08:05.843 Number of Firmware Slots: N/A 00:08:05.843 Firmware Slot 1 Read-Only: N/A 00:08:05.843 Firmware Activation Without Reset: N/A 00:08:05.843 Multiple Update Detection Support: N/A 00:08:05.843 Firmware Update Granularity: No Information Provided 00:08:05.843 Per-Namespace SMART Log: Yes 00:08:05.843 Asymmetric Namespace Access Log Page: Not Supported 00:08:05.843 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:08:05.843 Command Effects Log Page: Supported 00:08:05.843 Get Log Page Extended Data: Supported 00:08:05.843 Telemetry Log Pages: Not Supported 00:08:05.843 Persistent Event Log Pages: Not Supported 00:08:05.843 Supported Log Pages Log Page: May Support 00:08:05.843 Commands Supported & Effects Log Page: Not Supported 00:08:05.843 Feature Identifiers & Effects Log Page:May Support 00:08:05.843 NVMe-MI Commands & Effects Log Page: May Support 00:08:05.843 Data Area 4 for Telemetry Log: Not Supported 00:08:05.843 Error Log Page Entries Supported: 1 00:08:05.843 Keep Alive: Not Supported 00:08:05.843 00:08:05.843 NVM Command Set Attributes 00:08:05.843 ========================== 00:08:05.843 Submission Queue Entry Size 00:08:05.843 Max: 64 00:08:05.843 Min: 64 00:08:05.843 Completion Queue Entry Size 00:08:05.843 Max: 16 
00:08:05.843 Min: 16 00:08:05.843 Number of Namespaces: 256 00:08:05.843 Compare Command: Supported 00:08:05.843 Write Uncorrectable Command: Not Supported 00:08:05.843 Dataset Management Command: Supported 00:08:05.843 Write Zeroes Command: Supported 00:08:05.843 Set Features Save Field: Supported 00:08:05.843 Reservations: Not Supported 00:08:05.843 Timestamp: Supported 00:08:05.843 Copy: Supported 00:08:05.843 Volatile Write Cache: Present 00:08:05.843 Atomic Write Unit (Normal): 1 00:08:05.844 Atomic Write Unit (PFail): 1 00:08:05.844 Atomic Compare & Write Unit: 1 00:08:05.844 Fused Compare & Write: Not Supported 00:08:05.844 Scatter-Gather List 00:08:05.844 SGL Command Set: Supported 00:08:05.844 SGL Keyed: Not Supported 00:08:05.844 SGL Bit Bucket Descriptor: Not Supported 00:08:05.844 SGL Metadata Pointer: Not Supported 00:08:05.844 Oversized SGL: Not Supported 00:08:05.844 SGL Metadata Address: Not Supported 00:08:05.844 SGL Offset: Not Supported 00:08:05.844 Transport SGL Data Block: Not Supported 00:08:05.844 Replay Protected Memory Block: Not Supported 00:08:05.844 00:08:05.844 Firmware Slot Information 00:08:05.844 ========================= 00:08:05.844 Active slot: 1 00:08:05.844 Slot 1 Firmware Revision: 1.0 00:08:05.844 00:08:05.844 00:08:05.844 Commands Supported and Effects 00:08:05.844 ============================== 00:08:05.844 Admin Commands 00:08:05.844 -------------- 00:08:05.844 Delete I/O Submission Queue (00h): Supported 00:08:05.844 Create I/O Submission Queue (01h): Supported 00:08:05.844 Get Log Page (02h): Supported 00:08:05.844 Delete I/O Completion Queue (04h): Supported 00:08:05.844 Create I/O Completion Queue (05h): Supported 00:08:05.844 Identify (06h): Supported 00:08:05.844 Abort (08h): Supported 00:08:05.844 Set Features (09h): Supported 00:08:05.844 Get Features (0Ah): Supported 00:08:05.844 Asynchronous Event Request (0Ch): Supported 00:08:05.844 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:05.844 Directive Send (19h): Supported 00:08:05.844 Directive Receive (1Ah): Supported 00:08:05.844 Virtualization Management (1Ch): Supported 00:08:05.844 Doorbell Buffer Config (7Ch): Supported 00:08:05.844 Format NVM (80h): Supported LBA-Change 00:08:05.844 I/O Commands 00:08:05.844 ------------ 00:08:05.844 Flush (00h): Supported LBA-Change 00:08:05.844 Write (01h): Supported LBA-Change 00:08:05.844 Read (02h): Supported 00:08:05.844 Compare (05h): Supported 00:08:05.844 Write Zeroes (08h): Supported LBA-Change 00:08:05.844 Dataset Management (09h): Supported LBA-Change 00:08:05.844 Unknown (0Ch): Supported 00:08:05.844 Unknown (12h): Supported 00:08:05.844 Copy (19h): Supported LBA-Change 00:08:05.844 Unknown (1Dh): Supported LBA-Change 00:08:05.844 00:08:05.844 Error Log 00:08:05.844 ========= 00:08:05.844 00:08:05.844 Arbitration 00:08:05.844 =========== 00:08:05.844 Arbitration Burst: no limit 00:08:05.844 00:08:05.844 Power Management 00:08:05.844 ================ 00:08:05.844 Number of Power States: 1 00:08:05.844 Current Power State: Power State #0 00:08:05.844 Power State #0: 00:08:05.844 Max Power: 25.00 W 00:08:05.844 Non-Operational State: Operational 00:08:05.844 Entry Latency: 16 microseconds 00:08:05.844 Exit Latency: 4 microseconds 00:08:05.844 Relative Read Throughput: 0 00:08:05.844 Relative Read Latency: 0 00:08:05.844 Relative Write Throughput: 0 00:08:05.844 Relative Write Latency: 0 00:08:05.844 Idle Power: Not Reported 00:08:05.844 Active Power: Not Reported 00:08:05.844 Non-Operational Permissive Mode: Not Supported 
00:08:05.844 00:08:05.844 Health Information 00:08:05.844 ================== 00:08:05.844 Critical Warnings: 00:08:05.844 Available Spare Space: OK 00:08:05.844 Temperature: OK 00:08:05.844 Device Reliability: OK 00:08:05.844 Read Only: No 00:08:05.844 Volatile Memory Backup: OK 00:08:05.844 Current Temperature: 323 Kelvin (50 Celsius) 00:08:05.844 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:05.844 Available Spare: 0% 00:08:05.844 Available Spare Threshold: 0% 00:08:05.844 Life Percentage Used: 0% 00:08:05.844 Data Units Read: 2215 00:08:05.844 Data Units Written: 2002 00:08:05.844 Host Read Commands: 104185 00:08:05.844 Host Write Commands: 102454 00:08:05.844 Controller Busy Time: 0 minutes 00:08:05.844 Power Cycles: 0 00:08:05.844 Power On Hours: 0 hours 00:08:05.844 Unsafe Shutdowns: 0 00:08:05.844 Unrecoverable Media Errors: 0 00:08:05.844 Lifetime Error Log Entries: 0 00:08:05.844 Warning Temperature Time: 0 minutes 00:08:05.844 Critical Temperature Time: 0 minutes 00:08:05.844 00:08:05.844 Number of Queues 00:08:05.844 ================ 00:08:05.844 Number of I/O Submission Queues: 64 00:08:05.844 Number of I/O Completion Queues: 64 00:08:05.844 00:08:05.844 ZNS Specific Controller Data 00:08:05.844 ============================ 00:08:05.844 Zone Append Size Limit: 0 00:08:05.844 00:08:05.844 00:08:05.844 Active Namespaces 00:08:05.844 ================= 00:08:05.844 Namespace ID:1 00:08:05.844 Error Recovery Timeout: Unlimited 00:08:05.844 Command Set Identifier: NVM (00h) 00:08:05.844 Deallocate: Supported 00:08:05.844 Deallocated/Unwritten Error: Supported 00:08:05.844 Deallocated Read Value: All 0x00 00:08:05.844 Deallocate in Write Zeroes: Not Supported 00:08:05.844 Deallocated Guard Field: 0xFFFF 00:08:05.844 Flush: Supported 00:08:05.844 Reservation: Not Supported 00:08:05.844 Namespace Sharing Capabilities: Private 00:08:05.844 Size (in LBAs): 1048576 (4GiB) 00:08:05.844 Capacity (in LBAs): 1048576 (4GiB) 00:08:05.844 Utilization (in LBAs): 1048576 (4GiB) 00:08:05.844 Thin Provisioning: Not Supported 00:08:05.844 Per-NS Atomic Units: No 00:08:05.844 Maximum Single Source Range Length: 128 00:08:05.844 Maximum Copy Length: 128 00:08:05.844 Maximum Source Range Count: 128 00:08:05.844 NGUID/EUI64 Never Reused: No 00:08:05.844 Namespace Write Protected: No 00:08:05.844 Number of LBA Formats: 8 00:08:05.844 Current LBA Format: LBA Format #04 00:08:05.844 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.844 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.844 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.844 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.844 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.844 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.844 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.844 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.844 00:08:05.844 NVM Specific Namespace Data 00:08:05.844 =========================== 00:08:05.844 Logical Block Storage Tag Mask: 0 00:08:05.844 Protection Information Capabilities: 00:08:05.844 16b Guard Protection Information Storage Tag Support: No 00:08:05.844 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.844 Storage Tag Check Read Support: No 00:08:05.844 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.844 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.844 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.844 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.844 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.844 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.844 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.844 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.844 Namespace ID:2 00:08:05.844 Error Recovery Timeout: Unlimited 00:08:05.844 Command Set Identifier: NVM (00h) 00:08:05.844 Deallocate: Supported 00:08:05.844 Deallocated/Unwritten Error: Supported 00:08:05.844 Deallocated Read Value: All 0x00 00:08:05.844 Deallocate in Write Zeroes: Not Supported 00:08:05.844 Deallocated Guard Field: 0xFFFF 00:08:05.844 Flush: Supported 00:08:05.844 Reservation: Not Supported 00:08:05.844 Namespace Sharing Capabilities: Private 00:08:05.844 Size (in LBAs): 1048576 (4GiB) 00:08:05.844 Capacity (in LBAs): 1048576 (4GiB) 00:08:05.844 Utilization (in LBAs): 1048576 (4GiB) 00:08:05.844 Thin Provisioning: Not Supported 00:08:05.844 Per-NS Atomic Units: No 00:08:05.844 Maximum Single Source Range Length: 128 00:08:05.844 Maximum Copy Length: 128 00:08:05.844 Maximum Source Range Count: 128 00:08:05.844 NGUID/EUI64 Never Reused: No 00:08:05.844 Namespace Write Protected: No 00:08:05.844 Number of LBA Formats: 8 00:08:05.844 Current LBA Format: LBA Format #04 00:08:05.844 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.844 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.844 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.844 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.844 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.844 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.844 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.844 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.844 00:08:05.844 NVM Specific Namespace Data 00:08:05.844 =========================== 00:08:05.844 Logical Block Storage Tag Mask: 0 00:08:05.844 Protection Information Capabilities: 00:08:05.844 16b Guard Protection Information Storage Tag Support: No 00:08:05.844 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.845 Storage Tag Check Read Support: No 00:08:05.845 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Namespace ID:3 00:08:05.845 Error Recovery Timeout: Unlimited 00:08:05.845 Command Set Identifier: NVM (00h) 00:08:05.845 Deallocate: Supported 00:08:05.845 Deallocated/Unwritten Error: Supported 00:08:05.845 Deallocated Read 
Value: All 0x00 00:08:05.845 Deallocate in Write Zeroes: Not Supported 00:08:05.845 Deallocated Guard Field: 0xFFFF 00:08:05.845 Flush: Supported 00:08:05.845 Reservation: Not Supported 00:08:05.845 Namespace Sharing Capabilities: Private 00:08:05.845 Size (in LBAs): 1048576 (4GiB) 00:08:05.845 Capacity (in LBAs): 1048576 (4GiB) 00:08:05.845 Utilization (in LBAs): 1048576 (4GiB) 00:08:05.845 Thin Provisioning: Not Supported 00:08:05.845 Per-NS Atomic Units: No 00:08:05.845 Maximum Single Source Range Length: 128 00:08:05.845 Maximum Copy Length: 128 00:08:05.845 Maximum Source Range Count: 128 00:08:05.845 NGUID/EUI64 Never Reused: No 00:08:05.845 Namespace Write Protected: No 00:08:05.845 Number of LBA Formats: 8 00:08:05.845 Current LBA Format: LBA Format #04 00:08:05.845 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:05.845 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:05.845 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:05.845 LBA Format #03: Data Size: 512 Metadata Size: 64 00:08:05.845 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:05.845 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:05.845 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:05.845 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:05.845 00:08:05.845 NVM Specific Namespace Data 00:08:05.845 =========================== 00:08:05.845 Logical Block Storage Tag Mask: 0 00:08:05.845 Protection Information Capabilities: 00:08:05.845 16b Guard Protection Information Storage Tag Support: No 00:08:05.845 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:05.845 Storage Tag Check Read Support: No 00:08:05.845 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:05.845 03:34:58 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:08:05.845 03:34:58 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:08:06.104 ===================================================== 00:08:06.104 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:06.104 ===================================================== 00:08:06.104 Controller Capabilities/Features 00:08:06.104 ================================ 00:08:06.104 Vendor ID: 1b36 00:08:06.104 Subsystem Vendor ID: 1af4 00:08:06.104 Serial Number: 12343 00:08:06.104 Model Number: QEMU NVMe Ctrl 00:08:06.104 Firmware Version: 8.0.0 00:08:06.104 Recommended Arb Burst: 6 00:08:06.104 IEEE OUI Identifier: 00 54 52 00:08:06.104 Multi-path I/O 00:08:06.104 May have multiple subsystem ports: No 00:08:06.104 May have multiple controllers: Yes 00:08:06.104 Associated with SR-IOV VF: No 00:08:06.104 Max Data Transfer Size: 524288 00:08:06.104 Max Number of Namespaces: 
256 00:08:06.104 Max Number of I/O Queues: 64 00:08:06.104 NVMe Specification Version (VS): 1.4 00:08:06.104 NVMe Specification Version (Identify): 1.4 00:08:06.104 Maximum Queue Entries: 2048 00:08:06.104 Contiguous Queues Required: Yes 00:08:06.104 Arbitration Mechanisms Supported 00:08:06.104 Weighted Round Robin: Not Supported 00:08:06.104 Vendor Specific: Not Supported 00:08:06.104 Reset Timeout: 7500 ms 00:08:06.104 Doorbell Stride: 4 bytes 00:08:06.104 NVM Subsystem Reset: Not Supported 00:08:06.104 Command Sets Supported 00:08:06.104 NVM Command Set: Supported 00:08:06.104 Boot Partition: Not Supported 00:08:06.104 Memory Page Size Minimum: 4096 bytes 00:08:06.104 Memory Page Size Maximum: 65536 bytes 00:08:06.104 Persistent Memory Region: Not Supported 00:08:06.104 Optional Asynchronous Events Supported 00:08:06.104 Namespace Attribute Notices: Supported 00:08:06.104 Firmware Activation Notices: Not Supported 00:08:06.104 ANA Change Notices: Not Supported 00:08:06.104 PLE Aggregate Log Change Notices: Not Supported 00:08:06.104 LBA Status Info Alert Notices: Not Supported 00:08:06.104 EGE Aggregate Log Change Notices: Not Supported 00:08:06.104 Normal NVM Subsystem Shutdown event: Not Supported 00:08:06.104 Zone Descriptor Change Notices: Not Supported 00:08:06.104 Discovery Log Change Notices: Not Supported 00:08:06.104 Controller Attributes 00:08:06.104 128-bit Host Identifier: Not Supported 00:08:06.104 Non-Operational Permissive Mode: Not Supported 00:08:06.104 NVM Sets: Not Supported 00:08:06.104 Read Recovery Levels: Not Supported 00:08:06.104 Endurance Groups: Supported 00:08:06.104 Predictable Latency Mode: Not Supported 00:08:06.104 Traffic Based Keep Alive: Not Supported 00:08:06.104 Namespace Granularity: Not Supported 00:08:06.104 SQ Associations: Not Supported 00:08:06.104 UUID List: Not Supported 00:08:06.104 Multi-Domain Subsystem: Not Supported 00:08:06.104 Fixed Capacity Management: Not Supported 00:08:06.104 Variable Capacity Management: Not Supported 00:08:06.104 Delete Endurance Group: Not Supported 00:08:06.104 Delete NVM Set: Not Supported 00:08:06.104 Extended LBA Formats Supported: Supported 00:08:06.104 Flexible Data Placement Supported: Supported 00:08:06.104 00:08:06.104 Controller Memory Buffer Support 00:08:06.104 ================================ 00:08:06.104 Supported: No 00:08:06.104 00:08:06.104 Persistent Memory Region Support 00:08:06.104 ================================ 00:08:06.104 Supported: No 00:08:06.104 00:08:06.104 Admin Command Set Attributes 00:08:06.104 ============================ 00:08:06.104 Security Send/Receive: Not Supported 00:08:06.104 Format NVM: Supported 00:08:06.104 Firmware Activate/Download: Not Supported 00:08:06.104 Namespace Management: Supported 00:08:06.104 Device Self-Test: Not Supported 00:08:06.104 Directives: Supported 00:08:06.104 NVMe-MI: Not Supported 00:08:06.104 Virtualization Management: Not Supported 00:08:06.104 Doorbell Buffer Config: Supported 00:08:06.104 Get LBA Status Capability: Not Supported 00:08:06.104 Command & Feature Lockdown Capability: Not Supported 00:08:06.104 Abort Command Limit: 4 00:08:06.104 Async Event Request Limit: 4 00:08:06.104 Number of Firmware Slots: N/A 00:08:06.104 Firmware Slot 1 Read-Only: N/A 00:08:06.104 Firmware Activation Without Reset: N/A 00:08:06.104 Multiple Update Detection Support: N/A 00:08:06.104 Firmware Update Granularity: No Information Provided 00:08:06.104 Per-Namespace SMART Log: Yes 00:08:06.104 Asymmetric Namespace Access Log Page: Not Supported 
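Note: the namespace blocks in the identify dumps above are easy to sanity-check by hand. Each private namespace reports 1048576 LBAs and is currently on LBA Format #04 (4096-byte data size), and the SMART health section reports temperature in integer Kelvin. A minimal shell sketch of that arithmetic, using only values copied from the output above:

# 1048576 LBAs x 4096 B/LBA (LBA Format #04) = 4 GiB,
# matching "Size (in LBAs): 1048576 (4GiB)" in the namespace blocks above
echo $(( 1048576 * 4096 ))   # 4294967296 bytes
# Kelvin to Celsius, matching "323 Kelvin (50 Celsius)" in the health pages
echo $(( 323 - 273 ))        # 50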
00:08:06.104 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:08:06.104 Command Effects Log Page: Supported 00:08:06.104 Get Log Page Extended Data: Supported 00:08:06.104 Telemetry Log Pages: Not Supported 00:08:06.104 Persistent Event Log Pages: Not Supported 00:08:06.104 Supported Log Pages Log Page: May Support 00:08:06.104 Commands Supported & Effects Log Page: Not Supported 00:08:06.104 Feature Identifiers & Effects Log Page: May Support 00:08:06.104 NVMe-MI Commands & Effects Log Page: May Support 00:08:06.104 Data Area 4 for Telemetry Log: Not Supported 00:08:06.104 Error Log Page Entries Supported: 1 00:08:06.104 Keep Alive: Not Supported 00:08:06.104 00:08:06.104 NVM Command Set Attributes 00:08:06.104 ========================== 00:08:06.104 Submission Queue Entry Size 00:08:06.104 Max: 64 00:08:06.104 Min: 64 00:08:06.104 Completion Queue Entry Size 00:08:06.104 Max: 16 00:08:06.104 Min: 16 00:08:06.104 Number of Namespaces: 256 00:08:06.104 Compare Command: Supported 00:08:06.104 Write Uncorrectable Command: Not Supported 00:08:06.104 Dataset Management Command: Supported 00:08:06.104 Write Zeroes Command: Supported 00:08:06.104 Set Features Save Field: Supported 00:08:06.104 Reservations: Not Supported 00:08:06.104 Timestamp: Supported 00:08:06.104 Copy: Supported 00:08:06.105 Volatile Write Cache: Present 00:08:06.105 Atomic Write Unit (Normal): 1 00:08:06.105 Atomic Write Unit (PFail): 1 00:08:06.105 Atomic Compare & Write Unit: 1 00:08:06.105 Fused Compare & Write: Not Supported 00:08:06.105 Scatter-Gather List 00:08:06.105 SGL Command Set: Supported 00:08:06.105 SGL Keyed: Not Supported 00:08:06.105 SGL Bit Bucket Descriptor: Not Supported 00:08:06.105 SGL Metadata Pointer: Not Supported 00:08:06.105 Oversized SGL: Not Supported 00:08:06.105 SGL Metadata Address: Not Supported 00:08:06.105 SGL Offset: Not Supported 00:08:06.105 Transport SGL Data Block: Not Supported 00:08:06.105 Replay Protected Memory Block: Not Supported 00:08:06.105 00:08:06.105 Firmware Slot Information 00:08:06.105 ========================= 00:08:06.105 Active slot: 1 00:08:06.105 Slot 1 Firmware Revision: 1.0 00:08:06.105 00:08:06.105 00:08:06.105 Commands Supported and Effects 00:08:06.105 ============================== 00:08:06.105 Admin Commands 00:08:06.105 -------------- 00:08:06.105 Delete I/O Submission Queue (00h): Supported 00:08:06.105 Create I/O Submission Queue (01h): Supported 00:08:06.105 Get Log Page (02h): Supported 00:08:06.105 Delete I/O Completion Queue (04h): Supported 00:08:06.105 Create I/O Completion Queue (05h): Supported 00:08:06.105 Identify (06h): Supported 00:08:06.105 Abort (08h): Supported 00:08:06.105 Set Features (09h): Supported 00:08:06.105 Get Features (0Ah): Supported 00:08:06.105 Asynchronous Event Request (0Ch): Supported 00:08:06.105 Namespace Attachment (15h): Supported NS-Inventory-Change 00:08:06.105 Directive Send (19h): Supported 00:08:06.105 Directive Receive (1Ah): Supported 00:08:06.105 Virtualization Management (1Ch): Supported 00:08:06.105 Doorbell Buffer Config (7Ch): Supported 00:08:06.105 Format NVM (80h): Supported LBA-Change 00:08:06.105 I/O Commands 00:08:06.105 ------------ 00:08:06.105 Flush (00h): Supported LBA-Change 00:08:06.105 Write (01h): Supported LBA-Change 00:08:06.105 Read (02h): Supported 00:08:06.105 Compare (05h): Supported 00:08:06.105 Write Zeroes (08h): Supported LBA-Change 00:08:06.105 Dataset Management (09h): Supported LBA-Change 00:08:06.105 Unknown (0Ch): Supported 00:08:06.105 Unknown (12h): Supported 00:08:06.105 Copy 
(19h): Supported LBA-Change 00:08:06.105 Unknown (1Dh): Supported LBA-Change 00:08:06.105 00:08:06.105 Error Log 00:08:06.105 ========= 00:08:06.105 00:08:06.105 Arbitration 00:08:06.105 =========== 00:08:06.105 Arbitration Burst: no limit 00:08:06.105 00:08:06.105 Power Management 00:08:06.105 ================ 00:08:06.105 Number of Power States: 1 00:08:06.105 Current Power State: Power State #0 00:08:06.105 Power State #0: 00:08:06.105 Max Power: 25.00 W 00:08:06.105 Non-Operational State: Operational 00:08:06.105 Entry Latency: 16 microseconds 00:08:06.105 Exit Latency: 4 microseconds 00:08:06.105 Relative Read Throughput: 0 00:08:06.105 Relative Read Latency: 0 00:08:06.105 Relative Write Throughput: 0 00:08:06.105 Relative Write Latency: 0 00:08:06.105 Idle Power: Not Reported 00:08:06.105 Active Power: Not Reported 00:08:06.105 Non-Operational Permissive Mode: Not Supported 00:08:06.105 00:08:06.105 Health Information 00:08:06.105 ================== 00:08:06.105 Critical Warnings: 00:08:06.105 Available Spare Space: OK 00:08:06.105 Temperature: OK 00:08:06.105 Device Reliability: OK 00:08:06.105 Read Only: No 00:08:06.105 Volatile Memory Backup: OK 00:08:06.105 Current Temperature: 323 Kelvin (50 Celsius) 00:08:06.105 Temperature Threshold: 343 Kelvin (70 Celsius) 00:08:06.105 Available Spare: 0% 00:08:06.105 Available Spare Threshold: 0% 00:08:06.105 Life Percentage Used: 0% 00:08:06.105 Data Units Read: 1065 00:08:06.105 Data Units Written: 994 00:08:06.105 Host Read Commands: 37377 00:08:06.105 Host Write Commands: 36800 00:08:06.105 Controller Busy Time: 0 minutes 00:08:06.105 Power Cycles: 0 00:08:06.105 Power On Hours: 0 hours 00:08:06.105 Unsafe Shutdowns: 0 00:08:06.105 Unrecoverable Media Errors: 0 00:08:06.105 Lifetime Error Log Entries: 0 00:08:06.105 Warning Temperature Time: 0 minutes 00:08:06.105 Critical Temperature Time: 0 minutes 00:08:06.105 00:08:06.105 Number of Queues 00:08:06.105 ================ 00:08:06.105 Number of I/O Submission Queues: 64 00:08:06.105 Number of I/O Completion Queues: 64 00:08:06.105 00:08:06.105 ZNS Specific Controller Data 00:08:06.105 ============================ 00:08:06.105 Zone Append Size Limit: 0 00:08:06.105 00:08:06.105 00:08:06.105 Active Namespaces 00:08:06.105 ================= 00:08:06.105 Namespace ID:1 00:08:06.105 Error Recovery Timeout: Unlimited 00:08:06.105 Command Set Identifier: NVM (00h) 00:08:06.105 Deallocate: Supported 00:08:06.105 Deallocated/Unwritten Error: Supported 00:08:06.105 Deallocated Read Value: All 0x00 00:08:06.105 Deallocate in Write Zeroes: Not Supported 00:08:06.105 Deallocated Guard Field: 0xFFFF 00:08:06.105 Flush: Supported 00:08:06.105 Reservation: Not Supported 00:08:06.105 Namespace Sharing Capabilities: Multiple Controllers 00:08:06.105 Size (in LBAs): 262144 (1GiB) 00:08:06.105 Capacity (in LBAs): 262144 (1GiB) 00:08:06.105 Utilization (in LBAs): 262144 (1GiB) 00:08:06.105 Thin Provisioning: Not Supported 00:08:06.105 Per-NS Atomic Units: No 00:08:06.105 Maximum Single Source Range Length: 128 00:08:06.105 Maximum Copy Length: 128 00:08:06.105 Maximum Source Range Count: 128 00:08:06.105 NGUID/EUI64 Never Reused: No 00:08:06.105 Namespace Write Protected: No 00:08:06.105 Endurance group ID: 1 00:08:06.105 Number of LBA Formats: 8 00:08:06.105 Current LBA Format: LBA Format #04 00:08:06.105 LBA Format #00: Data Size: 512 Metadata Size: 0 00:08:06.105 LBA Format #01: Data Size: 512 Metadata Size: 8 00:08:06.105 LBA Format #02: Data Size: 512 Metadata Size: 16 00:08:06.105 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:08:06.105 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:08:06.105 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:08:06.105 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:08:06.105 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:08:06.105 00:08:06.105 Get Feature FDP: 00:08:06.105 ================ 00:08:06.105 Enabled: Yes 00:08:06.105 FDP configuration index: 0 00:08:06.105 00:08:06.105 FDP configurations log page 00:08:06.105 =========================== 00:08:06.105 Number of FDP configurations: 1 00:08:06.105 Version: 0 00:08:06.105 Size: 112 00:08:06.105 FDP Configuration Descriptor: 0 00:08:06.105 Descriptor Size: 96 00:08:06.105 Reclaim Group Identifier format: 2 00:08:06.105 FDP Volatile Write Cache: Not Present 00:08:06.105 FDP Configuration: Valid 00:08:06.105 Vendor Specific Size: 0 00:08:06.105 Number of Reclaim Groups: 2 00:08:06.105 Number of Reclaim Unit Handles: 8 00:08:06.105 Max Placement Identifiers: 128 00:08:06.105 Number of Namespaces Supported: 256 00:08:06.105 Reclaim Unit Nominal Size: 6000000 bytes 00:08:06.105 Estimated Reclaim Unit Time Limit: Not Reported 00:08:06.105 RUH Desc #000: RUH Type: Initially Isolated 00:08:06.105 RUH Desc #001: RUH Type: Initially Isolated 00:08:06.105 RUH Desc #002: RUH Type: Initially Isolated 00:08:06.105 RUH Desc #003: RUH Type: Initially Isolated 00:08:06.105 RUH Desc #004: RUH Type: Initially Isolated 00:08:06.105 RUH Desc #005: RUH Type: Initially Isolated 00:08:06.105 RUH Desc #006: RUH Type: Initially Isolated 00:08:06.105 RUH Desc #007: RUH Type: Initially Isolated 00:08:06.105 00:08:06.105 FDP reclaim unit handle usage log page 00:08:06.105 ====================================== 00:08:06.105 Number of Reclaim Unit Handles: 8 00:08:06.105 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:08:06.105 RUH Usage Desc #001: RUH Attributes: Unused 00:08:06.105 RUH Usage Desc #002: RUH Attributes: Unused 00:08:06.105 RUH Usage Desc #003: RUH Attributes: Unused 00:08:06.105 RUH Usage Desc #004: RUH Attributes: Unused 00:08:06.105 RUH Usage Desc #005: RUH Attributes: Unused 00:08:06.105 RUH Usage Desc #006: RUH Attributes: Unused 00:08:06.105 RUH Usage Desc #007: RUH Attributes: Unused 00:08:06.105 00:08:06.105 FDP statistics log page 00:08:06.105 ======================= 00:08:06.105 Host bytes with metadata written: 610902016 00:08:06.105 Media bytes with metadata written: 610983936 00:08:06.105 Media bytes erased: 0 00:08:06.105 00:08:06.105 FDP events log page 00:08:06.105 =================== 00:08:06.105 Number of FDP events: 0 00:08:06.105 00:08:06.105 NVM Specific Namespace Data 00:08:06.105 =========================== 00:08:06.105 Logical Block Storage Tag Mask: 0 00:08:06.105 Protection Information Capabilities: 00:08:06.105 16b Guard Protection Information Storage Tag Support: No 00:08:06.105 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:08:06.105 Storage Tag Check Read Support: No 00:08:06.105 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:06.105 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:06.105 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:06.106 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:06.106 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:06.106 Extended LBA Format #05: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:06.106 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:06.106 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:08:06.106 ************************************ 00:08:06.106 END TEST nvme_identify 00:08:06.106 ************************************ 00:08:06.106 00:08:06.106 real 0m1.188s 00:08:06.106 user 0m0.380s 00:08:06.106 sys 0m0.548s 00:08:06.106 03:34:58 nvme.nvme_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:06.106 03:34:58 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:08:06.106 03:34:58 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:08:06.106 03:34:58 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:06.106 03:34:58 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:06.106 03:34:58 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:06.106 ************************************ 00:08:06.106 START TEST nvme_perf 00:08:06.106 ************************************ 00:08:06.106 03:34:58 nvme.nvme_perf -- common/autotest_common.sh@1125 -- # nvme_perf 00:08:06.106 03:34:58 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:08:07.484 Initializing NVMe Controllers 00:08:07.484 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:07.484 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:07.484 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:07.484 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:07.484 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:07.484 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:07.484 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:07.484 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:07.484 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:07.484 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:07.484 Initialization complete. Launching workers. 
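Note: the nvme_perf stage above drives all six attached namespaces with a single spdk_nvme_perf run. A minimal sketch of the equivalent standalone invocation follows; the command line is copied verbatim from the log, while the flag annotations are my reading of the spdk_nvme_perf usage text and should be verified against --help in this SPDK tree rather than taken as a spec:

# -q 128   queue depth: 128 outstanding I/Os per namespace
# -w read  workload pattern: 100% reads
# -o 12288 I/O size in bytes (12 KiB, i.e. three 4096-byte LBAs)
# -t 1     run time in seconds
# -L       software latency tracking; given twice (-LL) to also print
#          the per-device latency histograms seen below
# -i 0     shared-memory group ID, matching the identify runs above
# -N       skip shutdown notification when detaching controllers
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N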
00:08:07.484 ======================================================== 00:08:07.484 Latency(us) 00:08:07.484 Device Information : IOPS MiB/s Average min max 00:08:07.484 PCIE (0000:00:10.0) NSID 1 from core 0: 17456.50 204.57 7344.63 5771.18 30451.86 00:08:07.484 PCIE (0000:00:11.0) NSID 1 from core 0: 17456.50 204.57 7335.39 5928.73 28892.36 00:08:07.484 PCIE (0000:00:13.0) NSID 1 from core 0: 17456.50 204.57 7324.96 5861.78 27517.36 00:08:07.484 PCIE (0000:00:12.0) NSID 1 from core 0: 17456.50 204.57 7314.38 5901.31 25884.01 00:08:07.484 PCIE (0000:00:12.0) NSID 2 from core 0: 17456.50 204.57 7303.91 5904.90 24306.85 00:08:07.484 PCIE (0000:00:12.0) NSID 3 from core 0: 17456.50 204.57 7293.54 5892.56 22732.24 00:08:07.484 ======================================================== 00:08:07.484 Total : 104738.99 1227.41 7319.47 5771.18 30451.86 00:08:07.484 00:08:07.484 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:07.484 ================================================================================= 00:08:07.484 1.00000% : 6175.508us 00:08:07.484 10.00000% : 6377.157us 00:08:07.484 25.00000% : 6604.012us 00:08:07.484 50.00000% : 6906.486us 00:08:07.484 75.00000% : 7208.960us 00:08:07.484 90.00000% : 8570.092us 00:08:07.484 95.00000% : 10284.111us 00:08:07.484 98.00000% : 13107.200us 00:08:07.484 99.00000% : 14720.394us 00:08:07.484 99.50000% : 23895.434us 00:08:07.484 99.90000% : 30045.735us 00:08:07.484 99.99000% : 30449.034us 00:08:07.484 99.99900% : 30650.683us 00:08:07.484 99.99990% : 30650.683us 00:08:07.484 99.99999% : 30650.683us 00:08:07.484 00:08:07.484 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:07.484 ================================================================================= 00:08:07.484 1.00000% : 6276.332us 00:08:07.484 10.00000% : 6427.569us 00:08:07.484 25.00000% : 6604.012us 00:08:07.484 50.00000% : 6906.486us 00:08:07.484 75.00000% : 7158.548us 00:08:07.484 90.00000% : 8570.092us 00:08:07.484 95.00000% : 10334.523us 00:08:07.484 98.00000% : 12855.138us 00:08:07.484 99.00000% : 15022.868us 00:08:07.484 99.50000% : 22786.363us 00:08:07.484 99.90000% : 28634.191us 00:08:07.484 99.99000% : 29037.489us 00:08:07.484 99.99900% : 29037.489us 00:08:07.484 99.99990% : 29037.489us 00:08:07.484 99.99999% : 29037.489us 00:08:07.484 00:08:07.484 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:07.484 ================================================================================= 00:08:07.484 1.00000% : 6251.126us 00:08:07.484 10.00000% : 6427.569us 00:08:07.484 25.00000% : 6604.012us 00:08:07.484 50.00000% : 6906.486us 00:08:07.484 75.00000% : 7158.548us 00:08:07.484 90.00000% : 8620.505us 00:08:07.484 95.00000% : 10183.286us 00:08:07.484 98.00000% : 12552.665us 00:08:07.485 99.00000% : 15123.692us 00:08:07.485 99.50000% : 21273.994us 00:08:07.485 99.90000% : 27222.646us 00:08:07.485 99.99000% : 27625.945us 00:08:07.485 99.99900% : 27625.945us 00:08:07.485 99.99990% : 27625.945us 00:08:07.485 99.99999% : 27625.945us 00:08:07.485 00:08:07.485 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:08:07.485 ================================================================================= 00:08:07.485 1.00000% : 6251.126us 00:08:07.485 10.00000% : 6427.569us 00:08:07.485 25.00000% : 6604.012us 00:08:07.485 50.00000% : 6906.486us 00:08:07.485 75.00000% : 7158.548us 00:08:07.485 90.00000% : 8721.329us 00:08:07.485 95.00000% : 10132.874us 00:08:07.485 98.00000% : 12603.077us 00:08:07.485 99.00000% 
: 14417.920us 00:08:07.485 99.50000% : 19660.800us 00:08:07.485 99.90000% : 25508.628us 00:08:07.485 99.99000% : 26012.751us 00:08:07.485 99.99900% : 26012.751us 00:08:07.485 99.99990% : 26012.751us 00:08:07.485 99.99999% : 26012.751us 00:08:07.485 00:08:07.485 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:07.485 ================================================================================= 00:08:07.485 1.00000% : 6251.126us 00:08:07.485 10.00000% : 6427.569us 00:08:07.485 25.00000% : 6604.012us 00:08:07.485 50.00000% : 6906.486us 00:08:07.485 75.00000% : 7158.548us 00:08:07.485 90.00000% : 8620.505us 00:08:07.485 95.00000% : 10132.874us 00:08:07.485 98.00000% : 13006.375us 00:08:07.485 99.00000% : 14115.446us 00:08:07.485 99.50000% : 18148.431us 00:08:07.485 99.90000% : 23895.434us 00:08:07.485 99.99000% : 24298.732us 00:08:07.485 99.99900% : 24399.557us 00:08:07.485 99.99990% : 24399.557us 00:08:07.485 99.99999% : 24399.557us 00:08:07.485 00:08:07.485 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:07.485 ================================================================================= 00:08:07.485 1.00000% : 6251.126us 00:08:07.485 10.00000% : 6427.569us 00:08:07.485 25.00000% : 6604.012us 00:08:07.485 50.00000% : 6906.486us 00:08:07.485 75.00000% : 7158.548us 00:08:07.485 90.00000% : 8620.505us 00:08:07.485 95.00000% : 10284.111us 00:08:07.485 98.00000% : 13308.849us 00:08:07.485 99.00000% : 14518.745us 00:08:07.485 99.50000% : 16636.062us 00:08:07.485 99.90000% : 22282.240us 00:08:07.485 99.99000% : 22786.363us 00:08:07.485 99.99900% : 22786.363us 00:08:07.485 99.99990% : 22786.363us 00:08:07.485 99.99999% : 22786.363us 00:08:07.485 00:08:07.485 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:08:07.485 ============================================================================== 00:08:07.485 Range in us Cumulative IO count 00:08:07.485 5747.003 - 5772.209: 0.0057% ( 1) 00:08:07.485 5772.209 - 5797.415: 0.0114% ( 1) 00:08:07.485 5797.415 - 5822.622: 0.0630% ( 9) 00:08:07.485 5822.622 - 5847.828: 0.0916% ( 5) 00:08:07.485 5847.828 - 5873.034: 0.1030% ( 2) 00:08:07.485 5873.034 - 5898.240: 0.1202% ( 3) 00:08:07.485 5898.240 - 5923.446: 0.1488% ( 5) 00:08:07.485 5923.446 - 5948.652: 0.1832% ( 6) 00:08:07.485 5948.652 - 5973.858: 0.2003% ( 3) 00:08:07.485 5973.858 - 5999.065: 0.2232% ( 4) 00:08:07.485 5999.065 - 6024.271: 0.2633% ( 7) 00:08:07.485 6024.271 - 6049.477: 0.3262% ( 11) 00:08:07.485 6049.477 - 6074.683: 0.3663% ( 7) 00:08:07.485 6074.683 - 6099.889: 0.4293% ( 11) 00:08:07.485 6099.889 - 6125.095: 0.5552% ( 22) 00:08:07.485 6125.095 - 6150.302: 0.8013% ( 43) 00:08:07.485 6150.302 - 6175.508: 1.2935% ( 86) 00:08:07.485 6175.508 - 6200.714: 1.9631% ( 117) 00:08:07.485 6200.714 - 6225.920: 2.7701% ( 141) 00:08:07.485 6225.920 - 6251.126: 3.8404% ( 187) 00:08:07.485 6251.126 - 6276.332: 4.9565% ( 195) 00:08:07.485 6276.332 - 6301.538: 6.4332% ( 258) 00:08:07.485 6301.538 - 6326.745: 7.9957% ( 273) 00:08:07.485 6326.745 - 6351.951: 9.8157% ( 318) 00:08:07.485 6351.951 - 6377.157: 11.5213% ( 298) 00:08:07.485 6377.157 - 6402.363: 13.3127% ( 313) 00:08:07.485 6402.363 - 6427.569: 15.0813% ( 309) 00:08:07.485 6427.569 - 6452.775: 16.9757% ( 331) 00:08:07.485 6452.775 - 6503.188: 20.5987% ( 633) 00:08:07.485 6503.188 - 6553.600: 24.3533% ( 656) 00:08:07.485 6553.600 - 6604.012: 28.2223% ( 676) 00:08:07.485 6604.012 - 6654.425: 32.0284% ( 665) 00:08:07.485 6654.425 - 6704.837: 35.9661% ( 688) 00:08:07.485 6704.837 - 
6755.249: 39.9783% ( 701) 00:08:07.485 6755.249 - 6805.662: 44.0591% ( 713) 00:08:07.485 6805.662 - 6856.074: 48.0197% ( 692) 00:08:07.485 6856.074 - 6906.486: 52.0147% ( 698) 00:08:07.485 6906.486 - 6956.898: 56.2214% ( 735) 00:08:07.485 6956.898 - 7007.311: 60.3880% ( 728) 00:08:07.485 7007.311 - 7057.723: 64.3601% ( 694) 00:08:07.485 7057.723 - 7108.135: 68.4295% ( 711) 00:08:07.485 7108.135 - 7158.548: 72.2585% ( 669) 00:08:07.485 7158.548 - 7208.960: 75.6067% ( 585) 00:08:07.485 7208.960 - 7259.372: 78.3139% ( 473) 00:08:07.485 7259.372 - 7309.785: 80.3457% ( 355) 00:08:07.485 7309.785 - 7360.197: 81.5304% ( 207) 00:08:07.485 7360.197 - 7410.609: 82.5263% ( 174) 00:08:07.485 7410.609 - 7461.022: 83.2360% ( 124) 00:08:07.485 7461.022 - 7511.434: 83.9400% ( 123) 00:08:07.485 7511.434 - 7561.846: 84.4609% ( 91) 00:08:07.485 7561.846 - 7612.258: 84.9702% ( 89) 00:08:07.485 7612.258 - 7662.671: 85.3938% ( 74) 00:08:07.485 7662.671 - 7713.083: 85.7944% ( 70) 00:08:07.485 7713.083 - 7763.495: 86.1664% ( 65) 00:08:07.485 7763.495 - 7813.908: 86.5098% ( 60) 00:08:07.485 7813.908 - 7864.320: 86.9448% ( 76) 00:08:07.485 7864.320 - 7914.732: 87.2711% ( 57) 00:08:07.485 7914.732 - 7965.145: 87.5916% ( 56) 00:08:07.485 7965.145 - 8015.557: 87.8549% ( 46) 00:08:07.485 8015.557 - 8065.969: 88.1067% ( 44) 00:08:07.485 8065.969 - 8116.382: 88.3356% ( 40) 00:08:07.485 8116.382 - 8166.794: 88.5302% ( 34) 00:08:07.485 8166.794 - 8217.206: 88.7248% ( 34) 00:08:07.485 8217.206 - 8267.618: 88.8965% ( 30) 00:08:07.485 8267.618 - 8318.031: 89.1026% ( 36) 00:08:07.485 8318.031 - 8368.443: 89.2628% ( 28) 00:08:07.485 8368.443 - 8418.855: 89.4345% ( 30) 00:08:07.485 8418.855 - 8469.268: 89.6291% ( 34) 00:08:07.485 8469.268 - 8519.680: 89.8581% ( 40) 00:08:07.485 8519.680 - 8570.092: 90.0298% ( 30) 00:08:07.485 8570.092 - 8620.505: 90.1843% ( 27) 00:08:07.485 8620.505 - 8670.917: 90.3388% ( 27) 00:08:07.485 8670.917 - 8721.329: 90.5163% ( 31) 00:08:07.485 8721.329 - 8771.742: 90.6593% ( 25) 00:08:07.485 8771.742 - 8822.154: 90.8425% ( 32) 00:08:07.485 8822.154 - 8872.566: 91.0085% ( 29) 00:08:07.485 8872.566 - 8922.978: 91.1916% ( 32) 00:08:07.485 8922.978 - 8973.391: 91.3862% ( 34) 00:08:07.485 8973.391 - 9023.803: 91.5465% ( 28) 00:08:07.485 9023.803 - 9074.215: 91.7468% ( 35) 00:08:07.485 9074.215 - 9124.628: 91.9357% ( 33) 00:08:07.485 9124.628 - 9175.040: 92.1360% ( 35) 00:08:07.485 9175.040 - 9225.452: 92.3191% ( 32) 00:08:07.485 9225.452 - 9275.865: 92.5309% ( 37) 00:08:07.485 9275.865 - 9326.277: 92.7083% ( 31) 00:08:07.485 9326.277 - 9376.689: 92.8686% ( 28) 00:08:07.485 9376.689 - 9427.102: 93.0460% ( 31) 00:08:07.485 9427.102 - 9477.514: 93.2005% ( 27) 00:08:07.485 9477.514 - 9527.926: 93.3436% ( 25) 00:08:07.485 9527.926 - 9578.338: 93.4295% ( 15) 00:08:07.485 9578.338 - 9628.751: 93.5783% ( 26) 00:08:07.485 9628.751 - 9679.163: 93.6870% ( 19) 00:08:07.485 9679.163 - 9729.575: 93.8301% ( 25) 00:08:07.485 9729.575 - 9779.988: 93.9446% ( 20) 00:08:07.485 9779.988 - 9830.400: 94.0533% ( 19) 00:08:07.485 9830.400 - 9880.812: 94.1678% ( 20) 00:08:07.485 9880.812 - 9931.225: 94.2422% ( 13) 00:08:07.485 9931.225 - 9981.637: 94.3739% ( 23) 00:08:07.485 9981.637 - 10032.049: 94.4654% ( 16) 00:08:07.485 10032.049 - 10082.462: 94.5742% ( 19) 00:08:07.485 10082.462 - 10132.874: 94.6886% ( 20) 00:08:07.485 10132.874 - 10183.286: 94.8375% ( 26) 00:08:07.485 10183.286 - 10233.698: 94.9176% ( 14) 00:08:07.485 10233.698 - 10284.111: 95.0263% ( 19) 00:08:07.485 10284.111 - 10334.523: 95.1179% ( 16) 00:08:07.485 
10334.523 - 10384.935: 95.2266% ( 19) 00:08:07.485 10384.935 - 10435.348: 95.3239% ( 17) 00:08:07.485 10435.348 - 10485.760: 95.4212% ( 17) 00:08:07.485 10485.760 - 10536.172: 95.5243% ( 18) 00:08:07.485 10536.172 - 10586.585: 95.6559% ( 23) 00:08:07.485 10586.585 - 10636.997: 95.7589% ( 18) 00:08:07.485 10636.997 - 10687.409: 95.8620% ( 18) 00:08:07.485 10687.409 - 10737.822: 95.9478% ( 15) 00:08:07.485 10737.822 - 10788.234: 96.0451% ( 17) 00:08:07.485 10788.234 - 10838.646: 96.1252% ( 14) 00:08:07.485 10838.646 - 10889.058: 96.2225% ( 17) 00:08:07.485 10889.058 - 10939.471: 96.3198% ( 17) 00:08:07.485 10939.471 - 10989.883: 96.4171% ( 17) 00:08:07.485 10989.883 - 11040.295: 96.5259% ( 19) 00:08:07.485 11040.295 - 11090.708: 96.6174% ( 16) 00:08:07.485 11090.708 - 11141.120: 96.6976% ( 14) 00:08:07.485 11141.120 - 11191.532: 96.7663% ( 12) 00:08:07.485 11191.532 - 11241.945: 96.8521% ( 15) 00:08:07.485 11241.945 - 11292.357: 96.9036% ( 9) 00:08:07.485 11292.357 - 11342.769: 96.9494% ( 8) 00:08:07.485 11342.769 - 11393.182: 97.0066% ( 10) 00:08:07.485 11393.182 - 11443.594: 97.0410% ( 6) 00:08:07.485 11443.594 - 11494.006: 97.0925% ( 9) 00:08:07.485 11494.006 - 11544.418: 97.1497% ( 10) 00:08:07.485 11544.418 - 11594.831: 97.1669% ( 3) 00:08:07.485 11594.831 - 11645.243: 97.2012% ( 6) 00:08:07.485 11645.243 - 11695.655: 97.2413% ( 7) 00:08:07.485 11695.655 - 11746.068: 97.2642% ( 4) 00:08:07.486 11746.068 - 11796.480: 97.2928% ( 5) 00:08:07.486 11796.480 - 11846.892: 97.3272% ( 6) 00:08:07.486 11846.892 - 11897.305: 97.3615% ( 6) 00:08:07.486 11897.305 - 11947.717: 97.3958% ( 6) 00:08:07.486 11947.717 - 11998.129: 97.4359% ( 7) 00:08:07.486 11998.129 - 12048.542: 97.4588% ( 4) 00:08:07.486 12048.542 - 12098.954: 97.4989% ( 7) 00:08:07.486 12098.954 - 12149.366: 97.5332% ( 6) 00:08:07.486 12149.366 - 12199.778: 97.5504% ( 3) 00:08:07.486 12199.778 - 12250.191: 97.5790% ( 5) 00:08:07.486 12250.191 - 12300.603: 97.5962% ( 3) 00:08:07.486 12300.603 - 12351.015: 97.6133% ( 3) 00:08:07.486 12351.015 - 12401.428: 97.6362% ( 4) 00:08:07.486 12401.428 - 12451.840: 97.6477% ( 2) 00:08:07.486 12451.840 - 12502.252: 97.6648% ( 3) 00:08:07.486 12502.252 - 12552.665: 97.6763% ( 2) 00:08:07.486 12552.665 - 12603.077: 97.6820% ( 1) 00:08:07.486 12603.077 - 12653.489: 97.6935% ( 2) 00:08:07.486 12653.489 - 12703.902: 97.7049% ( 2) 00:08:07.486 12703.902 - 12754.314: 97.7106% ( 1) 00:08:07.486 12754.314 - 12804.726: 97.7450% ( 6) 00:08:07.486 12804.726 - 12855.138: 97.7908% ( 8) 00:08:07.486 12855.138 - 12905.551: 97.8308% ( 7) 00:08:07.486 12905.551 - 13006.375: 97.9109% ( 14) 00:08:07.486 13006.375 - 13107.200: 98.0025% ( 16) 00:08:07.486 13107.200 - 13208.025: 98.1113% ( 19) 00:08:07.486 13208.025 - 13308.849: 98.2028% ( 16) 00:08:07.486 13308.849 - 13409.674: 98.2830% ( 14) 00:08:07.486 13409.674 - 13510.498: 98.3745% ( 16) 00:08:07.486 13510.498 - 13611.323: 98.4432% ( 12) 00:08:07.486 13611.323 - 13712.148: 98.5348% ( 16) 00:08:07.486 13712.148 - 13812.972: 98.6321% ( 17) 00:08:07.486 13812.972 - 13913.797: 98.7008% ( 12) 00:08:07.486 13913.797 - 14014.622: 98.7523% ( 9) 00:08:07.486 14014.622 - 14115.446: 98.7924% ( 7) 00:08:07.486 14115.446 - 14216.271: 98.8324% ( 7) 00:08:07.486 14216.271 - 14317.095: 98.8610% ( 5) 00:08:07.486 14317.095 - 14417.920: 98.9068% ( 8) 00:08:07.486 14417.920 - 14518.745: 98.9526% ( 8) 00:08:07.486 14518.745 - 14619.569: 98.9870% ( 6) 00:08:07.486 14619.569 - 14720.394: 99.0270% ( 7) 00:08:07.486 14720.394 - 14821.218: 99.0556% ( 5) 00:08:07.486 14821.218 - 14922.043: 
99.0900% ( 6) 00:08:07.486 14922.043 - 15022.868: 99.1243% ( 6) 00:08:07.486 15022.868 - 15123.692: 99.1415% ( 3) 00:08:07.486 15123.692 - 15224.517: 99.1529% ( 2) 00:08:07.486 15224.517 - 15325.342: 99.1758% ( 4) 00:08:07.486 15325.342 - 15426.166: 99.1873% ( 2) 00:08:07.486 15426.166 - 15526.991: 99.2216% ( 6) 00:08:07.486 15526.991 - 15627.815: 99.2674% ( 8) 00:08:07.486 22584.714 - 22685.538: 99.2731% ( 1) 00:08:07.486 22685.538 - 22786.363: 99.2846% ( 2) 00:08:07.486 22786.363 - 22887.188: 99.3075% ( 4) 00:08:07.486 22887.188 - 22988.012: 99.3246% ( 3) 00:08:07.486 22988.012 - 23088.837: 99.3418% ( 3) 00:08:07.486 23088.837 - 23189.662: 99.3647% ( 4) 00:08:07.486 23189.662 - 23290.486: 99.3819% ( 3) 00:08:07.486 23290.486 - 23391.311: 99.4048% ( 4) 00:08:07.486 23391.311 - 23492.135: 99.4219% ( 3) 00:08:07.486 23492.135 - 23592.960: 99.4391% ( 3) 00:08:07.486 23592.960 - 23693.785: 99.4620% ( 4) 00:08:07.486 23693.785 - 23794.609: 99.4792% ( 3) 00:08:07.486 23794.609 - 23895.434: 99.5021% ( 4) 00:08:07.486 23895.434 - 23996.258: 99.5192% ( 3) 00:08:07.486 23996.258 - 24097.083: 99.5421% ( 4) 00:08:07.486 24097.083 - 24197.908: 99.5593% ( 3) 00:08:07.486 24197.908 - 24298.732: 99.5765% ( 3) 00:08:07.486 24298.732 - 24399.557: 99.5994% ( 4) 00:08:07.486 24399.557 - 24500.382: 99.6165% ( 3) 00:08:07.486 24500.382 - 24601.206: 99.6337% ( 3) 00:08:07.486 28432.542 - 28634.191: 99.6509% ( 3) 00:08:07.486 28634.191 - 28835.840: 99.6852% ( 6) 00:08:07.486 28835.840 - 29037.489: 99.7196% ( 6) 00:08:07.486 29037.489 - 29239.138: 99.7653% ( 8) 00:08:07.486 29239.138 - 29440.788: 99.8054% ( 7) 00:08:07.486 29440.788 - 29642.437: 99.8455% ( 7) 00:08:07.486 29642.437 - 29844.086: 99.8855% ( 7) 00:08:07.486 29844.086 - 30045.735: 99.9199% ( 6) 00:08:07.486 30045.735 - 30247.385: 99.9599% ( 7) 00:08:07.486 30247.385 - 30449.034: 99.9943% ( 6) 00:08:07.486 30449.034 - 30650.683: 100.0000% ( 1) 00:08:07.486 00:08:07.486 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:08:07.486 ============================================================================== 00:08:07.486 Range in us Cumulative IO count 00:08:07.486 5923.446 - 5948.652: 0.0114% ( 2) 00:08:07.486 5948.652 - 5973.858: 0.0229% ( 2) 00:08:07.486 5973.858 - 5999.065: 0.0458% ( 4) 00:08:07.486 5999.065 - 6024.271: 0.1145% ( 12) 00:08:07.486 6024.271 - 6049.477: 0.1832% ( 12) 00:08:07.486 6049.477 - 6074.683: 0.2690% ( 15) 00:08:07.486 6074.683 - 6099.889: 0.3320% ( 11) 00:08:07.486 6099.889 - 6125.095: 0.3606% ( 5) 00:08:07.486 6125.095 - 6150.302: 0.4178% ( 10) 00:08:07.486 6150.302 - 6175.508: 0.4808% ( 11) 00:08:07.486 6175.508 - 6200.714: 0.5838% ( 18) 00:08:07.486 6200.714 - 6225.920: 0.7440% ( 28) 00:08:07.486 6225.920 - 6251.126: 0.9615% ( 38) 00:08:07.486 6251.126 - 6276.332: 1.6827% ( 126) 00:08:07.486 6276.332 - 6301.538: 2.7816% ( 192) 00:08:07.486 6301.538 - 6326.745: 4.0865% ( 228) 00:08:07.486 6326.745 - 6351.951: 5.5060% ( 248) 00:08:07.486 6351.951 - 6377.157: 6.9940% ( 260) 00:08:07.486 6377.157 - 6402.363: 8.8542% ( 325) 00:08:07.486 6402.363 - 6427.569: 10.7315% ( 328) 00:08:07.486 6427.569 - 6452.775: 12.7747% ( 357) 00:08:07.486 6452.775 - 6503.188: 17.3878% ( 806) 00:08:07.486 6503.188 - 6553.600: 21.9208% ( 792) 00:08:07.486 6553.600 - 6604.012: 26.2477% ( 756) 00:08:07.486 6604.012 - 6654.425: 30.5746% ( 756) 00:08:07.486 6654.425 - 6704.837: 35.0847% ( 788) 00:08:07.486 6704.837 - 6755.249: 39.7035% ( 807) 00:08:07.486 6755.249 - 6805.662: 44.4139% ( 823) 00:08:07.486 6805.662 - 6856.074: 49.0270% ( 806) 
00:08:07.486 6856.074 - 6906.486: 53.8805% ( 848) 00:08:07.486 6906.486 - 6956.898: 58.6538% ( 834) 00:08:07.486 6956.898 - 7007.311: 63.2555% ( 804) 00:08:07.486 7007.311 - 7057.723: 67.7713% ( 789) 00:08:07.486 7057.723 - 7108.135: 71.8864% ( 719) 00:08:07.486 7108.135 - 7158.548: 75.3434% ( 604) 00:08:07.486 7158.548 - 7208.960: 77.9533% ( 456) 00:08:07.486 7208.960 - 7259.372: 79.7104% ( 307) 00:08:07.486 7259.372 - 7309.785: 80.9867% ( 223) 00:08:07.486 7309.785 - 7360.197: 81.9196% ( 163) 00:08:07.486 7360.197 - 7410.609: 82.6866% ( 134) 00:08:07.486 7410.609 - 7461.022: 83.3677% ( 119) 00:08:07.486 7461.022 - 7511.434: 83.8942% ( 92) 00:08:07.486 7511.434 - 7561.846: 84.4437% ( 96) 00:08:07.486 7561.846 - 7612.258: 84.9645% ( 91) 00:08:07.486 7612.258 - 7662.671: 85.3766% ( 72) 00:08:07.486 7662.671 - 7713.083: 85.8059% ( 75) 00:08:07.486 7713.083 - 7763.495: 86.2008% ( 69) 00:08:07.486 7763.495 - 7813.908: 86.5270% ( 57) 00:08:07.486 7813.908 - 7864.320: 86.8304% ( 53) 00:08:07.486 7864.320 - 7914.732: 87.1509% ( 56) 00:08:07.486 7914.732 - 7965.145: 87.4027% ( 44) 00:08:07.486 7965.145 - 8015.557: 87.6889% ( 50) 00:08:07.486 8015.557 - 8065.969: 87.9407% ( 44) 00:08:07.486 8065.969 - 8116.382: 88.2612% ( 56) 00:08:07.486 8116.382 - 8166.794: 88.5531% ( 51) 00:08:07.486 8166.794 - 8217.206: 88.8049% ( 44) 00:08:07.486 8217.206 - 8267.618: 89.0568% ( 44) 00:08:07.486 8267.618 - 8318.031: 89.3315% ( 48) 00:08:07.486 8318.031 - 8368.443: 89.5089% ( 31) 00:08:07.486 8368.443 - 8418.855: 89.6692% ( 28) 00:08:07.486 8418.855 - 8469.268: 89.8123% ( 25) 00:08:07.486 8469.268 - 8519.680: 89.9382% ( 22) 00:08:07.486 8519.680 - 8570.092: 90.0698% ( 23) 00:08:07.486 8570.092 - 8620.505: 90.2301% ( 28) 00:08:07.486 8620.505 - 8670.917: 90.4361% ( 36) 00:08:07.486 8670.917 - 8721.329: 90.6765% ( 42) 00:08:07.486 8721.329 - 8771.742: 90.8997% ( 39) 00:08:07.486 8771.742 - 8822.154: 91.0714% ( 30) 00:08:07.486 8822.154 - 8872.566: 91.2546% ( 32) 00:08:07.486 8872.566 - 8922.978: 91.4148% ( 28) 00:08:07.486 8922.978 - 8973.391: 91.5865% ( 30) 00:08:07.486 8973.391 - 9023.803: 91.7525% ( 29) 00:08:07.486 9023.803 - 9074.215: 91.9128% ( 28) 00:08:07.486 9074.215 - 9124.628: 92.0501% ( 24) 00:08:07.486 9124.628 - 9175.040: 92.2562% ( 36) 00:08:07.486 9175.040 - 9225.452: 92.4164% ( 28) 00:08:07.486 9225.452 - 9275.865: 92.5939% ( 31) 00:08:07.486 9275.865 - 9326.277: 92.7713% ( 31) 00:08:07.486 9326.277 - 9376.689: 92.9487% ( 31) 00:08:07.486 9376.689 - 9427.102: 93.0975% ( 26) 00:08:07.486 9427.102 - 9477.514: 93.2463% ( 26) 00:08:07.486 9477.514 - 9527.926: 93.3780% ( 23) 00:08:07.486 9527.926 - 9578.338: 93.5039% ( 22) 00:08:07.486 9578.338 - 9628.751: 93.6298% ( 22) 00:08:07.486 9628.751 - 9679.163: 93.7500% ( 21) 00:08:07.486 9679.163 - 9729.575: 93.8874% ( 24) 00:08:07.486 9729.575 - 9779.988: 94.0190% ( 23) 00:08:07.486 9779.988 - 9830.400: 94.1277% ( 19) 00:08:07.486 9830.400 - 9880.812: 94.2422% ( 20) 00:08:07.486 9880.812 - 9931.225: 94.3624% ( 21) 00:08:07.486 9931.225 - 9981.637: 94.4826% ( 21) 00:08:07.486 9981.637 - 10032.049: 94.5971% ( 20) 00:08:07.486 10032.049 - 10082.462: 94.6715% ( 13) 00:08:07.486 10082.462 - 10132.874: 94.7859% ( 20) 00:08:07.486 10132.874 - 10183.286: 94.8775% ( 16) 00:08:07.486 10183.286 - 10233.698: 94.9290% ( 9) 00:08:07.486 10233.698 - 10284.111: 94.9920% ( 11) 00:08:07.486 10284.111 - 10334.523: 95.0492% ( 10) 00:08:07.486 10334.523 - 10384.935: 95.1122% ( 11) 00:08:07.486 10384.935 - 10435.348: 95.2553% ( 25) 00:08:07.486 10435.348 - 10485.760: 95.3526% ( 
17) 00:08:07.486 10485.760 - 10536.172: 95.4212% ( 12) 00:08:07.486 10536.172 - 10586.585: 95.4842% ( 11) 00:08:07.486 10586.585 - 10636.997: 95.5472% ( 11) 00:08:07.487 10636.997 - 10687.409: 95.6330% ( 15) 00:08:07.487 10687.409 - 10737.822: 95.7017% ( 12) 00:08:07.487 10737.822 - 10788.234: 95.8276% ( 22) 00:08:07.487 10788.234 - 10838.646: 95.9421% ( 20) 00:08:07.487 10838.646 - 10889.058: 96.0279% ( 15) 00:08:07.487 10889.058 - 10939.471: 96.1367% ( 19) 00:08:07.487 10939.471 - 10989.883: 96.2340% ( 17) 00:08:07.487 10989.883 - 11040.295: 96.3370% ( 18) 00:08:07.487 11040.295 - 11090.708: 96.4343% ( 17) 00:08:07.487 11090.708 - 11141.120: 96.5259% ( 16) 00:08:07.487 11141.120 - 11191.532: 96.6117% ( 15) 00:08:07.487 11191.532 - 11241.945: 96.6861% ( 13) 00:08:07.487 11241.945 - 11292.357: 96.7605% ( 13) 00:08:07.487 11292.357 - 11342.769: 96.8178% ( 10) 00:08:07.487 11342.769 - 11393.182: 96.8864% ( 12) 00:08:07.487 11393.182 - 11443.594: 96.9380% ( 9) 00:08:07.487 11443.594 - 11494.006: 96.9952% ( 10) 00:08:07.487 11494.006 - 11544.418: 97.0810% ( 15) 00:08:07.487 11544.418 - 11594.831: 97.1440% ( 11) 00:08:07.487 11594.831 - 11645.243: 97.1898% ( 8) 00:08:07.487 11645.243 - 11695.655: 97.2184% ( 5) 00:08:07.487 11695.655 - 11746.068: 97.2642% ( 8) 00:08:07.487 11746.068 - 11796.480: 97.2985% ( 6) 00:08:07.487 11796.480 - 11846.892: 97.3443% ( 8) 00:08:07.487 11846.892 - 11897.305: 97.3787% ( 6) 00:08:07.487 11897.305 - 11947.717: 97.4130% ( 6) 00:08:07.487 11947.717 - 11998.129: 97.4473% ( 6) 00:08:07.487 11998.129 - 12048.542: 97.4874% ( 7) 00:08:07.487 12048.542 - 12098.954: 97.5275% ( 7) 00:08:07.487 12098.954 - 12149.366: 97.5675% ( 7) 00:08:07.487 12149.366 - 12199.778: 97.5962% ( 5) 00:08:07.487 12199.778 - 12250.191: 97.6133% ( 3) 00:08:07.487 12250.191 - 12300.603: 97.6362% ( 4) 00:08:07.487 12300.603 - 12351.015: 97.6648% ( 5) 00:08:07.487 12351.015 - 12401.428: 97.6763% ( 2) 00:08:07.487 12401.428 - 12451.840: 97.6992% ( 4) 00:08:07.487 12451.840 - 12502.252: 97.7335% ( 6) 00:08:07.487 12502.252 - 12552.665: 97.7621% ( 5) 00:08:07.487 12552.665 - 12603.077: 97.7736% ( 2) 00:08:07.487 12603.077 - 12653.489: 97.8194% ( 8) 00:08:07.487 12653.489 - 12703.902: 97.8537% ( 6) 00:08:07.487 12703.902 - 12754.314: 97.9109% ( 10) 00:08:07.487 12754.314 - 12804.726: 97.9739% ( 11) 00:08:07.487 12804.726 - 12855.138: 98.0311% ( 10) 00:08:07.487 12855.138 - 12905.551: 98.0998% ( 12) 00:08:07.487 12905.551 - 13006.375: 98.2143% ( 20) 00:08:07.487 13006.375 - 13107.200: 98.3173% ( 18) 00:08:07.487 13107.200 - 13208.025: 98.4146% ( 17) 00:08:07.487 13208.025 - 13308.849: 98.5062% ( 16) 00:08:07.487 13308.849 - 13409.674: 98.6092% ( 18) 00:08:07.487 13409.674 - 13510.498: 98.7065% ( 17) 00:08:07.487 13510.498 - 13611.323: 98.8038% ( 17) 00:08:07.487 13611.323 - 13712.148: 98.8324% ( 5) 00:08:07.487 13712.148 - 13812.972: 98.8496% ( 3) 00:08:07.487 13812.972 - 13913.797: 98.8725% ( 4) 00:08:07.487 13913.797 - 14014.622: 98.9011% ( 5) 00:08:07.487 14720.394 - 14821.218: 98.9583% ( 10) 00:08:07.487 14821.218 - 14922.043: 98.9927% ( 6) 00:08:07.487 14922.043 - 15022.868: 99.0327% ( 7) 00:08:07.487 15022.868 - 15123.692: 99.0785% ( 8) 00:08:07.487 15123.692 - 15224.517: 99.1243% ( 8) 00:08:07.487 15224.517 - 15325.342: 99.1644% ( 7) 00:08:07.487 15325.342 - 15426.166: 99.2102% ( 8) 00:08:07.487 15426.166 - 15526.991: 99.2502% ( 7) 00:08:07.487 15526.991 - 15627.815: 99.2674% ( 3) 00:08:07.487 21576.468 - 21677.292: 99.2903% ( 4) 00:08:07.487 21677.292 - 21778.117: 99.3075% ( 3) 00:08:07.487 
21778.117 - 21878.942: 99.3304% ( 4) 00:08:07.487 21878.942 - 21979.766: 99.3533% ( 4) 00:08:07.487 21979.766 - 22080.591: 99.3761% ( 4) 00:08:07.487 22080.591 - 22181.415: 99.3933% ( 3) 00:08:07.487 22181.415 - 22282.240: 99.4105% ( 3) 00:08:07.487 22282.240 - 22383.065: 99.4334% ( 4) 00:08:07.487 22383.065 - 22483.889: 99.4563% ( 4) 00:08:07.487 22483.889 - 22584.714: 99.4792% ( 4) 00:08:07.487 22584.714 - 22685.538: 99.4963% ( 3) 00:08:07.487 22685.538 - 22786.363: 99.5192% ( 4) 00:08:07.487 22786.363 - 22887.188: 99.5421% ( 4) 00:08:07.487 22887.188 - 22988.012: 99.5650% ( 4) 00:08:07.487 22988.012 - 23088.837: 99.5879% ( 4) 00:08:07.487 23088.837 - 23189.662: 99.6051% ( 3) 00:08:07.487 23189.662 - 23290.486: 99.6280% ( 4) 00:08:07.487 23290.486 - 23391.311: 99.6337% ( 1) 00:08:07.487 27020.997 - 27222.646: 99.6451% ( 2) 00:08:07.487 27222.646 - 27424.295: 99.6909% ( 8) 00:08:07.487 27424.295 - 27625.945: 99.7310% ( 7) 00:08:07.487 27625.945 - 27827.594: 99.7711% ( 7) 00:08:07.487 27827.594 - 28029.243: 99.8168% ( 8) 00:08:07.487 28029.243 - 28230.892: 99.8569% ( 7) 00:08:07.487 28230.892 - 28432.542: 99.8970% ( 7) 00:08:07.487 28432.542 - 28634.191: 99.9428% ( 8) 00:08:07.487 28634.191 - 28835.840: 99.9828% ( 7) 00:08:07.487 28835.840 - 29037.489: 100.0000% ( 3) 00:08:07.487 00:08:07.487 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:08:07.487 ============================================================================== 00:08:07.487 Range in us Cumulative IO count 00:08:07.487 5847.828 - 5873.034: 0.0114% ( 2) 00:08:07.487 5873.034 - 5898.240: 0.0343% ( 4) 00:08:07.487 5898.240 - 5923.446: 0.0572% ( 4) 00:08:07.487 5923.446 - 5948.652: 0.0744% ( 3) 00:08:07.487 5948.652 - 5973.858: 0.1030% ( 5) 00:08:07.487 5973.858 - 5999.065: 0.1545% ( 9) 00:08:07.487 5999.065 - 6024.271: 0.1889% ( 6) 00:08:07.487 6024.271 - 6049.477: 0.2118% ( 4) 00:08:07.487 6074.683 - 6099.889: 0.2804% ( 12) 00:08:07.487 6099.889 - 6125.095: 0.3262% ( 8) 00:08:07.487 6125.095 - 6150.302: 0.3720% ( 8) 00:08:07.487 6150.302 - 6175.508: 0.4178% ( 8) 00:08:07.487 6175.508 - 6200.714: 0.5437% ( 22) 00:08:07.487 6200.714 - 6225.920: 0.8070% ( 46) 00:08:07.487 6225.920 - 6251.126: 1.2935% ( 85) 00:08:07.487 6251.126 - 6276.332: 2.1120% ( 143) 00:08:07.487 6276.332 - 6301.538: 3.1765% ( 186) 00:08:07.487 6301.538 - 6326.745: 4.1896% ( 177) 00:08:07.487 6326.745 - 6351.951: 5.4087% ( 213) 00:08:07.487 6351.951 - 6377.157: 7.0112% ( 280) 00:08:07.487 6377.157 - 6402.363: 8.8599% ( 323) 00:08:07.487 6402.363 - 6427.569: 10.9032% ( 357) 00:08:07.487 6427.569 - 6452.775: 13.1181% ( 387) 00:08:07.487 6452.775 - 6503.188: 17.5137% ( 768) 00:08:07.487 6503.188 - 6553.600: 21.8407% ( 756) 00:08:07.487 6553.600 - 6604.012: 26.4480% ( 805) 00:08:07.487 6604.012 - 6654.425: 30.8837% ( 775) 00:08:07.487 6654.425 - 6704.837: 35.4396% ( 796) 00:08:07.487 6704.837 - 6755.249: 39.9439% ( 787) 00:08:07.487 6755.249 - 6805.662: 44.6085% ( 815) 00:08:07.487 6805.662 - 6856.074: 49.2216% ( 806) 00:08:07.487 6856.074 - 6906.486: 53.9721% ( 830) 00:08:07.487 6906.486 - 6956.898: 58.7225% ( 830) 00:08:07.487 6956.898 - 7007.311: 63.3986% ( 817) 00:08:07.487 7007.311 - 7057.723: 67.8972% ( 786) 00:08:07.487 7057.723 - 7108.135: 71.8807% ( 696) 00:08:07.487 7108.135 - 7158.548: 75.2804% ( 594) 00:08:07.487 7158.548 - 7208.960: 77.8903% ( 456) 00:08:07.487 7208.960 - 7259.372: 79.6646% ( 310) 00:08:07.487 7259.372 - 7309.785: 80.8093% ( 200) 00:08:07.487 7309.785 - 7360.197: 81.7022% ( 156) 00:08:07.487 7360.197 - 7410.609: 
82.4176% ( 125) 00:08:07.487 7410.609 - 7461.022: 82.9899% ( 100) 00:08:07.487 7461.022 - 7511.434: 83.4936% ( 88) 00:08:07.487 7511.434 - 7561.846: 84.0430% ( 96) 00:08:07.487 7561.846 - 7612.258: 84.5467% ( 88) 00:08:07.487 7612.258 - 7662.671: 84.9130% ( 64) 00:08:07.487 7662.671 - 7713.083: 85.3136% ( 70) 00:08:07.487 7713.083 - 7763.495: 85.6456% ( 58) 00:08:07.487 7763.495 - 7813.908: 85.9776% ( 58) 00:08:07.487 7813.908 - 7864.320: 86.3038% ( 57) 00:08:07.487 7864.320 - 7914.732: 86.6014% ( 52) 00:08:07.487 7914.732 - 7965.145: 86.9334% ( 58) 00:08:07.487 7965.145 - 8015.557: 87.2596% ( 57) 00:08:07.487 8015.557 - 8065.969: 87.5229% ( 46) 00:08:07.487 8065.969 - 8116.382: 87.8148% ( 51) 00:08:07.487 8116.382 - 8166.794: 88.0895% ( 48) 00:08:07.487 8166.794 - 8217.206: 88.3700% ( 49) 00:08:07.487 8217.206 - 8267.618: 88.6046% ( 41) 00:08:07.487 8267.618 - 8318.031: 88.8278% ( 39) 00:08:07.487 8318.031 - 8368.443: 89.0167% ( 33) 00:08:07.487 8368.443 - 8418.855: 89.2857% ( 47) 00:08:07.487 8418.855 - 8469.268: 89.5433% ( 45) 00:08:07.487 8469.268 - 8519.680: 89.7722% ( 40) 00:08:07.487 8519.680 - 8570.092: 89.9725% ( 35) 00:08:07.487 8570.092 - 8620.505: 90.1843% ( 37) 00:08:07.487 8620.505 - 8670.917: 90.3674% ( 32) 00:08:07.487 8670.917 - 8721.329: 90.5506% ( 32) 00:08:07.487 8721.329 - 8771.742: 90.7337% ( 32) 00:08:07.487 8771.742 - 8822.154: 90.9169% ( 32) 00:08:07.487 8822.154 - 8872.566: 91.0943% ( 31) 00:08:07.487 8872.566 - 8922.978: 91.2775% ( 32) 00:08:07.487 8922.978 - 8973.391: 91.4549% ( 31) 00:08:07.487 8973.391 - 9023.803: 91.6266% ( 30) 00:08:07.487 9023.803 - 9074.215: 91.7869% ( 28) 00:08:07.487 9074.215 - 9124.628: 91.9242% ( 24) 00:08:07.487 9124.628 - 9175.040: 92.0788% ( 27) 00:08:07.487 9175.040 - 9225.452: 92.2333% ( 27) 00:08:07.487 9225.452 - 9275.865: 92.3764% ( 25) 00:08:07.487 9275.865 - 9326.277: 92.5309% ( 27) 00:08:07.487 9326.277 - 9376.689: 92.7141% ( 32) 00:08:07.487 9376.689 - 9427.102: 92.8858% ( 30) 00:08:07.487 9427.102 - 9477.514: 93.0689% ( 32) 00:08:07.487 9477.514 - 9527.926: 93.2406% ( 30) 00:08:07.487 9527.926 - 9578.338: 93.5039% ( 46) 00:08:07.487 9578.338 - 9628.751: 93.6756% ( 30) 00:08:07.487 9628.751 - 9679.163: 93.8187% ( 25) 00:08:07.487 9679.163 - 9729.575: 93.9389% ( 21) 00:08:07.487 9729.575 - 9779.988: 94.0705% ( 23) 00:08:07.488 9779.988 - 9830.400: 94.2022% ( 23) 00:08:07.488 9830.400 - 9880.812: 94.3510% ( 26) 00:08:07.488 9880.812 - 9931.225: 94.4712% ( 21) 00:08:07.488 9931.225 - 9981.637: 94.5913% ( 21) 00:08:07.488 9981.637 - 10032.049: 94.7230% ( 23) 00:08:07.488 10032.049 - 10082.462: 94.8546% ( 23) 00:08:07.488 10082.462 - 10132.874: 94.9748% ( 21) 00:08:07.488 10132.874 - 10183.286: 95.1122% ( 24) 00:08:07.488 10183.286 - 10233.698: 95.2495% ( 24) 00:08:07.488 10233.698 - 10284.111: 95.3583% ( 19) 00:08:07.488 10284.111 - 10334.523: 95.4384% ( 14) 00:08:07.488 10334.523 - 10384.935: 95.5243% ( 15) 00:08:07.488 10384.935 - 10435.348: 95.6902% ( 29) 00:08:07.488 10435.348 - 10485.760: 95.7933% ( 18) 00:08:07.488 10485.760 - 10536.172: 95.8677% ( 13) 00:08:07.488 10536.172 - 10586.585: 95.9364% ( 12) 00:08:07.488 10586.585 - 10636.997: 96.0165% ( 14) 00:08:07.488 10636.997 - 10687.409: 96.0737% ( 10) 00:08:07.488 10687.409 - 10737.822: 96.1538% ( 14) 00:08:07.488 10737.822 - 10788.234: 96.2168% ( 11) 00:08:07.488 10788.234 - 10838.646: 96.2855% ( 12) 00:08:07.488 10838.646 - 10889.058: 96.3599% ( 13) 00:08:07.488 10889.058 - 10939.471: 96.4228% ( 11) 00:08:07.488 10939.471 - 10989.883: 96.4686% ( 8) 00:08:07.488 
00:08:07.488 [histogram continues for the preceding device: buckets "10989.883 - 11040.295: 96.5259% ( 10)" through "27424.295 - 27625.945: 100.0000% ( 4)" elided]
00:08:07.488 
00:08:07.488 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:07.488 ==============================================================================
00:08:07.488        Range in us     Cumulative    IO count
00:08:07.489 [buckets "5898.240 - 5923.446: 0.0286% ( 5)" through "25811.102 - 26012.751: 100.0000% ( 3)" elided]
00:08:07.489 
00:08:07.489 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:07.489 ==============================================================================
00:08:07.489        Range in us     Cumulative    IO count
00:08:07.490 [buckets "5898.240 - 5923.446: 0.0172% ( 3)" through "24298.732 - 24399.557: 100.0000% ( 1)" elided]
00:08:07.490 
00:08:07.490 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:07.490 ==============================================================================
00:08:07.490        Range in us     Cumulative    IO count
00:08:07.491 [buckets "5873.034 - 5898.240: 0.0114% ( 2)" through "22685.538 - 22786.363: 100.0000% ( 2)" elided]
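
Each histogram entry in the blocks above has the form "lower - upper: cumulative% ( count)": the percentage is cumulative over all I/Os that completed at or below the bucket's upper bound, and the count is the number of I/Os that landed in that bucket. Entries in this format are easy to scrape back into structured tuples for post-processing; the sketch below is a minimal illustration (the regex and helper name are mine, not part of SPDK):

import re

# Matches one histogram entry, e.g. "5898.240 - 5923.446: 0.0286% ( 5)".
ENTRY = re.compile(r"(\d+\.\d+) - (\d+\.\d+):\s+(\d+\.\d+)%\s+\(\s*(\d+)\)")

def parse_entries(text):
    """Yield (lower_us, upper_us, cumulative_pct, io_count) tuples."""
    for m in ENTRY.finditer(text):
        lower, upper, pct, count = m.groups()
        yield float(lower), float(upper), float(pct), int(count)

sample = "5898.240 - 5923.446: 0.0286% ( 5) 5923.446 - 5948.652: 0.0687% ( 7)"
print(list(parse_entries(sample)))
# [(5898.24, 5923.446, 0.0286, 5), (5923.446, 5948.652, 0.0687, 7)]
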
00:08:07.492 
00:08:07.492 03:34:59 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:08:08.866 Initializing NVMe Controllers
00:08:08.866 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:08:08.866 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:08:08.866 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:08:08.866 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:08:08.866 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:08:08.866 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:08:08.866 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:08:08.866 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:08:08.866 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:08:08.866 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:08:08.866 Initialization complete. Launching workers.
00:08:08.866 ========================================================
00:08:08.866                                                   Latency(us)
00:08:08.866 Device Information                     :       IOPS      MiB/s    Average        min        max
00:08:08.866 PCIE (0000:00:10.0) NSID 1 from core  0:   16933.57     198.44    7572.00    5518.06   34260.71
00:08:08.866 PCIE (0000:00:11.0) NSID 1 from core  0:   16933.57     198.44    7560.48    5542.69   32366.12
00:08:08.867 PCIE (0000:00:13.0) NSID 1 from core  0:   16933.57     198.44    7548.59    5540.04   30727.86
00:08:08.867 PCIE (0000:00:12.0) NSID 1 from core  0:   16933.57     198.44    7536.84    5487.33   29011.09
00:08:08.867 PCIE (0000:00:12.0) NSID 2 from core  0:   16933.57     198.44    7525.21    5541.05   27239.20
00:08:08.867 PCIE (0000:00:12.0) NSID 3 from core  0:   16997.47     199.19    7485.32    5525.39   22119.79
00:08:08.867 ========================================================
00:08:08.867 Total                                  :  101665.30    1191.39    7538.04    5487.33   34260.71
00:08:08.867 
00:08:08.867 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:08.867 =================================================================================
00:08:08.867   1.00000% :  5873.034us
00:08:08.867  10.00000% :  6276.332us
00:08:08.867  25.00000% :  6553.600us
00:08:08.867  50.00000% :  6956.898us
00:08:08.867  75.00000% :  8065.969us
00:08:08.867  90.00000% :  9175.040us
00:08:08.867  95.00000% : 10334.523us
00:08:08.867  98.00000% : 11897.305us
00:08:08.867  99.00000% : 13611.323us
00:08:08.867  99.50000% : 29037.489us
00:08:08.867  99.90000% : 33877.071us
00:08:08.867  99.99000% : 34280.369us
00:08:08.867  99.99900% : 34280.369us
00:08:08.867  99.99990% : 34280.369us
00:08:08.867  99.99999% : 34280.369us
00:08:08.867 
00:08:08.867 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:08.867 =================================================================================
00:08:08.867   1.00000% :  5999.065us
00:08:08.867  10.00000% :  6301.538us
00:08:08.867  25.00000% :  6604.012us
00:08:08.867  50.00000% :  6956.898us
00:08:08.867  75.00000% :  8166.794us
00:08:08.867  90.00000% :  8973.391us
00:08:08.867  95.00000% : 10384.935us
00:08:08.867  98.00000% : 12098.954us
00:08:08.867  99.00000% : 13611.323us
00:08:08.867  99.50000% : 27222.646us
00:08:08.867  99.90000% : 32062.228us
00:08:08.867  99.99000% : 32465.526us
00:08:08.867  99.99900% : 32465.526us
00:08:08.867  99.99990% : 32465.526us
00:08:08.867  99.99999% : 32465.526us
00:08:08.867 
00:08:08.867 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:08.867 =================================================================================
00:08:08.867   1.00000% :  5923.446us
00:08:08.867  10.00000% :  6276.332us
00:08:08.867  25.00000% :  6604.012us
00:08:08.867  50.00000% :  6956.898us
00:08:08.867  75.00000% :  8065.969us
00:08:08.867  90.00000% :  9124.628us
00:08:08.867  95.00000% : 10384.935us
00:08:08.867  98.00000% : 11947.717us
00:08:08.867  99.00000% : 13812.972us
00:08:08.867  99.50000% : 25811.102us
00:08:08.867  99.90000% : 30449.034us
00:08:08.867  99.99000% : 30852.332us
00:08:08.867  99.99900% : 30852.332us
00:08:08.867  99.99990% : 30852.332us
00:08:08.867  99.99999% : 30852.332us
00:08:08.867 
00:08:08.867 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:08.867 =================================================================================
00:08:08.867   1.00000% :  5948.652us
00:08:08.867  10.00000% :  6301.538us
00:08:08.867  25.00000% :  6604.012us
00:08:08.867  50.00000% :  6956.898us
00:08:08.867  75.00000% :  8116.382us
00:08:08.867  90.00000% :  9175.040us
00:08:08.867  95.00000% : 10334.523us
00:08:08.867  98.00000% : 11897.305us
00:08:08.867  99.00000% : 12905.551us
00:08:08.867  99.50000% : 23996.258us
00:08:08.867  99.90000% : 28634.191us
00:08:08.867  99.99000% : 29037.489us
00:08:08.867  99.99900% : 29037.489us
00:08:08.867  99.99990% : 29037.489us
00:08:08.867  99.99999% : 29037.489us
00:08:08.867 
00:08:08.867 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:08:08.867 =================================================================================
00:08:08.867   1.00000% :  5948.652us
00:08:08.867  10.00000% :  6301.538us
00:08:08.867  25.00000% :  6604.012us
00:08:08.867  50.00000% :  6956.898us
00:08:08.867  75.00000% :  8166.794us
00:08:08.867  90.00000% :  9124.628us
00:08:08.867  95.00000% : 10384.935us
00:08:08.867  98.00000% : 11897.305us
00:08:08.867  99.00000% : 12703.902us
00:08:08.867  99.50000% : 22383.065us
00:08:08.867  99.90000% : 27020.997us
00:08:08.867  99.99000% : 27222.646us
00:08:08.867  99.99900% : 27424.295us
00:08:08.867  99.99990% : 27424.295us
00:08:08.867  99.99999% : 27424.295us
00:08:08.867 
00:08:08.867 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:08:08.867 =================================================================================
00:08:08.867   1.00000% :  5999.065us
00:08:08.867  10.00000% :  6301.538us
00:08:08.867  25.00000% :  6604.012us
00:08:08.867  50.00000% :  6956.898us
00:08:08.867  75.00000% :  8116.382us
00:08:08.867  90.00000% :  9175.040us
00:08:08.867  95.00000% : 10384.935us
00:08:08.867  98.00000% : 11947.717us
00:08:08.867  99.00000% : 12855.138us
00:08:08.867  99.50000% : 17039.360us
00:08:08.867  99.90000% : 21778.117us
00:08:08.867  99.99000% : 22181.415us
00:08:08.867  99.99900% : 22181.415us
00:08:08.867  99.99990% : 22181.415us
00:08:08.867  99.99999% : 22181.415us
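
The summary table above is internally consistent: MiB/s equals IOPS times the 12288-byte I/O size from the command line, and with a queue depth of 128 per namespace, Little's law (outstanding I/Os = completion rate times average latency) predicts the reported average latency to within a fraction of a percent. A quick check of the Total row (a verification sketch only, not part of the test):

# Consistency check against the Total row above (not part of the test itself).
io_size = 12288          # bytes, from the -o 12288 option
qd = 128                 # per-namespace queue depth, from -q 128
namespaces = 6           # six NSIDs listed in the table
total_iops = 101665.30   # Total row, IOPS column

mib_s = total_iops * io_size / (1024 * 1024)
print(f"{mib_s:.2f} MiB/s")          # -> 1191.39, matching the table

# Little's law: outstanding I/Os = IOPS * average latency.
avg_latency_us = namespaces * qd / total_iops * 1e6
print(f"{avg_latency_us:.2f} us")    # -> ~7554, close to the reported 7538.04
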
00:08:08.867 
00:08:08.867 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:08:08.867 ==============================================================================
00:08:08.867        Range in us     Cumulative    IO count
00:08:08.868 [buckets "5494.942 - 5520.148: 0.0059% ( 1)" ... "6906.486 - 6956.898: 51.4151% ( 526)" ... "34078.720 - 34280.369: 100.0000% ( 6)" elided]
00:08:08.868 
00:08:08.868 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:08:08.868 ==============================================================================
00:08:08.868        Range in us     Cumulative    IO count
00:08:08.869 [buckets "5520.148 - 5545.354: 0.0059% ( 1)" through "32263.877 - 32465.526: 100.0000% ( 5)" elided]
00:08:08.869 
00:08:08.869 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:08:08.869 ==============================================================================
00:08:08.869        Range in us     Cumulative    IO count
00:08:08.870 [buckets "5520.148 - 5545.354: 0.0118% ( 2)" through "30650.683 - 30852.332: 100.0000% ( 4)" elided]
00:08:08.870 
00:08:08.870 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:08:08.870 ==============================================================================
00:08:08.870        Range in us     Cumulative    IO count
00:08:08.870 [buckets from "5469.735 - 5494.942: 0.0059% ( 1)" onward; capture truncated at "7763.495 - 7813.908: 72.6415% ( 83)"]
00:08:08.870 7813.908 - 7864.320: 73.2724% ( 107) 00:08:08.870 7864.320 - 7914.732: 73.6969% ( 72) 00:08:08.870 7914.732 - 7965.145: 73.9917% ( 50) 00:08:08.870 7965.145 - 8015.557: 74.3278% ( 57) 00:08:08.870 8015.557 - 8065.969: 74.7052% ( 64) 00:08:08.871 8065.969 - 8116.382: 75.0708% ( 62) 00:08:08.871 8116.382 - 8166.794: 75.6132% ( 92) 00:08:08.871 8166.794 - 8217.206: 76.3561% ( 126) 00:08:08.871 8217.206 - 8267.618: 77.2759% ( 156) 00:08:08.871 8267.618 - 8318.031: 78.2075% ( 158) 00:08:08.871 8318.031 - 8368.443: 79.3337% ( 191) 00:08:08.871 8368.443 - 8418.855: 80.5837% ( 212) 00:08:08.871 8418.855 - 8469.268: 82.1757% ( 270) 00:08:08.871 8469.268 - 8519.680: 83.5554% ( 234) 00:08:08.871 8519.680 - 8570.092: 85.0177% ( 248) 00:08:08.871 8570.092 - 8620.505: 85.9788% ( 163) 00:08:08.871 8620.505 - 8670.917: 86.4623% ( 82) 00:08:08.871 8670.917 - 8721.329: 86.9634% ( 85) 00:08:08.871 8721.329 - 8771.742: 87.4882% ( 89) 00:08:08.871 8771.742 - 8822.154: 88.0130% ( 89) 00:08:08.871 8822.154 - 8872.566: 88.4847% ( 80) 00:08:08.871 8872.566 - 8922.978: 89.0035% ( 88) 00:08:08.871 8922.978 - 8973.391: 89.3396% ( 57) 00:08:08.871 8973.391 - 9023.803: 89.5814% ( 41) 00:08:08.871 9023.803 - 9074.215: 89.7524% ( 29) 00:08:08.871 9074.215 - 9124.628: 89.8880% ( 23) 00:08:08.871 9124.628 - 9175.040: 90.0708% ( 31) 00:08:08.871 9175.040 - 9225.452: 90.2064% ( 23) 00:08:08.871 9225.452 - 9275.865: 90.3833% ( 30) 00:08:08.871 9275.865 - 9326.277: 90.6250% ( 41) 00:08:08.871 9326.277 - 9376.689: 91.0200% ( 67) 00:08:08.871 9376.689 - 9427.102: 91.4151% ( 67) 00:08:08.871 9427.102 - 9477.514: 91.6863% ( 46) 00:08:08.871 9477.514 - 9527.926: 91.8809% ( 33) 00:08:08.871 9527.926 - 9578.338: 92.0814% ( 34) 00:08:08.871 9578.338 - 9628.751: 92.2936% ( 36) 00:08:08.871 9628.751 - 9679.163: 92.4469% ( 26) 00:08:08.871 9679.163 - 9729.575: 92.6651% ( 37) 00:08:08.871 9729.575 - 9779.988: 92.8007% ( 23) 00:08:08.871 9779.988 - 9830.400: 92.8774% ( 13) 00:08:08.871 9830.400 - 9880.812: 92.9717% ( 16) 00:08:08.871 9880.812 - 9931.225: 93.0837% ( 19) 00:08:08.871 9931.225 - 9981.637: 93.1840% ( 17) 00:08:08.871 9981.637 - 10032.049: 93.3550% ( 29) 00:08:08.871 10032.049 - 10082.462: 93.5259% ( 29) 00:08:08.871 10082.462 - 10132.874: 93.8974% ( 63) 00:08:08.871 10132.874 - 10183.286: 94.3691% ( 80) 00:08:08.871 10183.286 - 10233.698: 94.6757% ( 52) 00:08:08.871 10233.698 - 10284.111: 94.8585% ( 31) 00:08:08.871 10284.111 - 10334.523: 95.0531% ( 33) 00:08:08.871 10334.523 - 10384.935: 95.1651% ( 19) 00:08:08.871 10384.935 - 10435.348: 95.4245% ( 44) 00:08:08.871 10435.348 - 10485.760: 95.5071% ( 14) 00:08:08.871 10485.760 - 10536.172: 95.5778% ( 12) 00:08:08.871 10536.172 - 10586.585: 95.6486% ( 12) 00:08:08.871 10586.585 - 10636.997: 95.6958% ( 8) 00:08:08.871 10636.997 - 10687.409: 95.7488% ( 9) 00:08:08.871 10687.409 - 10737.822: 95.7901% ( 7) 00:08:08.871 10737.822 - 10788.234: 95.8373% ( 8) 00:08:08.871 10788.234 - 10838.646: 95.9139% ( 13) 00:08:08.871 10838.646 - 10889.058: 96.0024% ( 15) 00:08:08.871 10889.058 - 10939.471: 96.2028% ( 34) 00:08:08.871 10939.471 - 10989.883: 96.2677% ( 11) 00:08:08.871 10989.883 - 11040.295: 96.3090% ( 7) 00:08:08.871 11040.295 - 11090.708: 96.3502% ( 7) 00:08:08.871 11090.708 - 11141.120: 96.3974% ( 8) 00:08:08.871 11141.120 - 11191.532: 96.4682% ( 12) 00:08:08.871 11191.532 - 11241.945: 96.5507% ( 14) 00:08:08.871 11241.945 - 11292.357: 96.6333% ( 14) 00:08:08.871 11292.357 - 11342.769: 96.7335% ( 17) 00:08:08.871 11342.769 - 11393.182: 96.8396% ( 18) 00:08:08.871 
11393.182 - 11443.594: 97.0342% ( 33) 00:08:08.871 11443.594 - 11494.006: 97.2170% ( 31) 00:08:08.871 11494.006 - 11544.418: 97.4057% ( 32) 00:08:08.871 11544.418 - 11594.831: 97.5472% ( 24) 00:08:08.871 11594.831 - 11645.243: 97.6533% ( 18) 00:08:08.871 11645.243 - 11695.655: 97.7535% ( 17) 00:08:08.871 11695.655 - 11746.068: 97.8479% ( 16) 00:08:08.871 11746.068 - 11796.480: 97.9009% ( 9) 00:08:08.871 11796.480 - 11846.892: 97.9540% ( 9) 00:08:08.871 11846.892 - 11897.305: 98.0425% ( 15) 00:08:08.871 11897.305 - 11947.717: 98.1840% ( 24) 00:08:08.871 11947.717 - 11998.129: 98.2665% ( 14) 00:08:08.871 11998.129 - 12048.542: 98.3137% ( 8) 00:08:08.871 12048.542 - 12098.954: 98.3667% ( 9) 00:08:08.871 12098.954 - 12149.366: 98.3903% ( 4) 00:08:08.871 12149.366 - 12199.778: 98.4257% ( 6) 00:08:08.871 12199.778 - 12250.191: 98.4552% ( 5) 00:08:08.871 12250.191 - 12300.603: 98.4906% ( 6) 00:08:08.871 12552.665 - 12603.077: 98.5083% ( 3) 00:08:08.871 12603.077 - 12653.489: 98.5318% ( 4) 00:08:08.871 12653.489 - 12703.902: 98.5672% ( 6) 00:08:08.871 12703.902 - 12754.314: 98.6144% ( 8) 00:08:08.871 12754.314 - 12804.726: 98.6675% ( 9) 00:08:08.871 12804.726 - 12855.138: 98.9210% ( 43) 00:08:08.871 12855.138 - 12905.551: 99.0035% ( 14) 00:08:08.871 12905.551 - 13006.375: 99.0625% ( 10) 00:08:08.871 13006.375 - 13107.200: 99.1215% ( 10) 00:08:08.871 13107.200 - 13208.025: 99.1450% ( 4) 00:08:08.871 13208.025 - 13308.849: 99.1686% ( 4) 00:08:08.871 13308.849 - 13409.674: 99.1863% ( 3) 00:08:08.871 13409.674 - 13510.498: 99.2040% ( 3) 00:08:08.871 13510.498 - 13611.323: 99.2217% ( 3) 00:08:08.871 13611.323 - 13712.148: 99.2276% ( 1) 00:08:08.871 13712.148 - 13812.972: 99.2453% ( 3) 00:08:08.871 22887.188 - 22988.012: 99.2689% ( 4) 00:08:08.871 22988.012 - 23088.837: 99.2925% ( 4) 00:08:08.871 23088.837 - 23189.662: 99.3160% ( 4) 00:08:08.871 23189.662 - 23290.486: 99.3455% ( 5) 00:08:08.871 23290.486 - 23391.311: 99.3632% ( 3) 00:08:08.871 23391.311 - 23492.135: 99.3927% ( 5) 00:08:08.871 23492.135 - 23592.960: 99.4163% ( 4) 00:08:08.871 23592.960 - 23693.785: 99.4399% ( 4) 00:08:08.871 23693.785 - 23794.609: 99.4634% ( 4) 00:08:08.871 23794.609 - 23895.434: 99.4870% ( 4) 00:08:08.871 23895.434 - 23996.258: 99.5165% ( 5) 00:08:08.871 23996.258 - 24097.083: 99.5401% ( 4) 00:08:08.871 24097.083 - 24197.908: 99.5637% ( 4) 00:08:08.871 24197.908 - 24298.732: 99.5873% ( 4) 00:08:08.871 24298.732 - 24399.557: 99.6167% ( 5) 00:08:08.871 24399.557 - 24500.382: 99.6226% ( 1) 00:08:08.871 27424.295 - 27625.945: 99.6580% ( 6) 00:08:08.871 27625.945 - 27827.594: 99.7111% ( 9) 00:08:08.871 27827.594 - 28029.243: 99.7583% ( 8) 00:08:08.871 28029.243 - 28230.892: 99.8054% ( 8) 00:08:08.871 28230.892 - 28432.542: 99.8585% ( 9) 00:08:08.871 28432.542 - 28634.191: 99.9057% ( 8) 00:08:08.871 28634.191 - 28835.840: 99.9528% ( 8) 00:08:08.871 28835.840 - 29037.489: 100.0000% ( 8) 00:08:08.871 00:08:08.871 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:08:08.871 ============================================================================== 00:08:08.871 Range in us Cumulative IO count 00:08:08.871 5520.148 - 5545.354: 0.0059% ( 1) 00:08:08.871 5620.972 - 5646.178: 0.0177% ( 2) 00:08:08.871 5696.591 - 5721.797: 0.0295% ( 2) 00:08:08.871 5721.797 - 5747.003: 0.0354% ( 1) 00:08:08.871 5747.003 - 5772.209: 0.0531% ( 3) 00:08:08.871 5772.209 - 5797.415: 0.1002% ( 8) 00:08:08.871 5797.415 - 5822.622: 0.1474% ( 8) 00:08:08.871 5822.622 - 5847.828: 0.2476% ( 17) 00:08:08.871 5847.828 - 5873.034: 0.3656% ( 20) 
00:08:08.871 5873.034 - 5898.240: 0.4953% ( 22) 00:08:08.871 5898.240 - 5923.446: 0.7901% ( 50) 00:08:08.871 5923.446 - 5948.652: 1.1851% ( 67) 00:08:08.871 5948.652 - 5973.858: 1.3679% ( 31) 00:08:08.871 5973.858 - 5999.065: 1.5625% ( 33) 00:08:08.871 5999.065 - 6024.271: 1.8219% ( 44) 00:08:08.871 6024.271 - 6049.477: 2.3054% ( 82) 00:08:08.871 6049.477 - 6074.683: 2.8892% ( 99) 00:08:08.871 6074.683 - 6099.889: 3.4139% ( 89) 00:08:08.871 6099.889 - 6125.095: 3.8443% ( 73) 00:08:08.871 6125.095 - 6150.302: 4.4104% ( 96) 00:08:08.871 6150.302 - 6175.508: 5.2241% ( 138) 00:08:08.871 6175.508 - 6200.714: 6.0849% ( 146) 00:08:08.871 6200.714 - 6225.920: 7.2170% ( 192) 00:08:08.871 6225.920 - 6251.126: 8.4788% ( 214) 00:08:08.871 6251.126 - 6276.332: 9.6757% ( 203) 00:08:08.871 6276.332 - 6301.538: 11.1616% ( 252) 00:08:08.871 6301.538 - 6326.745: 12.7771% ( 274) 00:08:08.871 6326.745 - 6351.951: 14.4988% ( 292) 00:08:08.871 6351.951 - 6377.157: 15.6958% ( 203) 00:08:08.871 6377.157 - 6402.363: 16.8219% ( 191) 00:08:08.871 6402.363 - 6427.569: 18.1840% ( 231) 00:08:08.871 6427.569 - 6452.775: 19.2866% ( 187) 00:08:08.871 6452.775 - 6503.188: 21.9811% ( 457) 00:08:08.871 6503.188 - 6553.600: 24.4811% ( 424) 00:08:08.871 6553.600 - 6604.012: 27.7476% ( 554) 00:08:08.871 6604.012 - 6654.425: 30.3007% ( 433) 00:08:08.872 6654.425 - 6704.837: 33.1663% ( 486) 00:08:08.872 6704.837 - 6755.249: 36.4505% ( 557) 00:08:08.872 6755.249 - 6805.662: 39.4281% ( 505) 00:08:08.872 6805.662 - 6856.074: 43.5024% ( 691) 00:08:08.872 6856.074 - 6906.486: 47.5059% ( 679) 00:08:08.872 6906.486 - 6956.898: 50.9729% ( 588) 00:08:08.872 6956.898 - 7007.311: 54.4693% ( 593) 00:08:08.872 7007.311 - 7057.723: 57.9599% ( 592) 00:08:08.872 7057.723 - 7108.135: 60.4658% ( 425) 00:08:08.872 7108.135 - 7158.548: 61.9929% ( 259) 00:08:08.872 7158.548 - 7208.960: 63.5554% ( 265) 00:08:08.872 7208.960 - 7259.372: 64.9764% ( 241) 00:08:08.872 7259.372 - 7309.785: 65.8785% ( 153) 00:08:08.872 7309.785 - 7360.197: 66.7217% ( 143) 00:08:08.872 7360.197 - 7410.609: 67.3467% ( 106) 00:08:08.872 7410.609 - 7461.022: 68.1132% ( 130) 00:08:08.872 7461.022 - 7511.434: 69.1097% ( 169) 00:08:08.872 7511.434 - 7561.846: 69.8290% ( 122) 00:08:08.872 7561.846 - 7612.258: 70.6250% ( 135) 00:08:08.872 7612.258 - 7662.671: 71.1910% ( 96) 00:08:08.872 7662.671 - 7713.083: 71.7276% ( 91) 00:08:08.872 7713.083 - 7763.495: 72.1108% ( 65) 00:08:08.872 7763.495 - 7813.908: 72.7712% ( 112) 00:08:08.872 7813.908 - 7864.320: 72.9953% ( 38) 00:08:08.872 7864.320 - 7914.732: 73.3019% ( 52) 00:08:08.872 7914.732 - 7965.145: 73.5377% ( 40) 00:08:08.872 7965.145 - 8015.557: 73.8090% ( 46) 00:08:08.872 8015.557 - 8065.969: 74.1097% ( 51) 00:08:08.872 8065.969 - 8116.382: 74.4752% ( 62) 00:08:08.872 8116.382 - 8166.794: 75.1533% ( 115) 00:08:08.872 8166.794 - 8217.206: 76.0790% ( 157) 00:08:08.872 8217.206 - 8267.618: 77.1816% ( 187) 00:08:08.872 8267.618 - 8318.031: 78.1427% ( 163) 00:08:08.872 8318.031 - 8368.443: 79.3337% ( 202) 00:08:08.872 8368.443 - 8418.855: 80.6191% ( 218) 00:08:08.872 8418.855 - 8469.268: 81.9222% ( 221) 00:08:08.872 8469.268 - 8519.680: 83.3726% ( 246) 00:08:08.872 8519.680 - 8570.092: 84.8880% ( 257) 00:08:08.872 8570.092 - 8620.505: 85.7252% ( 142) 00:08:08.872 8620.505 - 8670.917: 86.2854% ( 95) 00:08:08.872 8670.917 - 8721.329: 86.8986% ( 104) 00:08:08.872 8721.329 - 8771.742: 87.4469% ( 93) 00:08:08.872 8771.742 - 8822.154: 87.9540% ( 86) 00:08:08.872 8822.154 - 8872.566: 88.4493% ( 84) 00:08:08.872 8872.566 - 8922.978: 89.0389% ( 
100) 00:08:08.872 8922.978 - 8973.391: 89.4045% ( 62) 00:08:08.872 8973.391 - 9023.803: 89.7524% ( 59) 00:08:08.872 9023.803 - 9074.215: 89.9351% ( 31) 00:08:08.872 9074.215 - 9124.628: 90.0649% ( 22) 00:08:08.872 9124.628 - 9175.040: 90.1769% ( 19) 00:08:08.872 9175.040 - 9225.452: 90.2712% ( 16) 00:08:08.872 9225.452 - 9275.865: 90.3950% ( 21) 00:08:08.872 9275.865 - 9326.277: 90.5425% ( 25) 00:08:08.872 9326.277 - 9376.689: 90.6958% ( 26) 00:08:08.872 9376.689 - 9427.102: 90.9198% ( 38) 00:08:08.872 9427.102 - 9477.514: 91.0318% ( 19) 00:08:08.872 9477.514 - 9527.926: 91.1910% ( 27) 00:08:08.872 9527.926 - 9578.338: 91.3502% ( 27) 00:08:08.872 9578.338 - 9628.751: 91.5330% ( 31) 00:08:08.872 9628.751 - 9679.163: 91.6981% ( 28) 00:08:08.872 9679.163 - 9729.575: 91.9340% ( 40) 00:08:08.872 9729.575 - 9779.988: 92.1875% ( 43) 00:08:08.872 9779.988 - 9830.400: 92.5943% ( 69) 00:08:08.872 9830.400 - 9880.812: 93.0837% ( 83) 00:08:08.872 9880.812 - 9931.225: 93.3373% ( 43) 00:08:08.872 9931.225 - 9981.637: 93.6321% ( 50) 00:08:08.872 9981.637 - 10032.049: 93.9564% ( 55) 00:08:08.872 10032.049 - 10082.462: 94.1097% ( 26) 00:08:08.872 10082.462 - 10132.874: 94.2335% ( 21) 00:08:08.872 10132.874 - 10183.286: 94.4575% ( 38) 00:08:08.872 10183.286 - 10233.698: 94.6344% ( 30) 00:08:08.872 10233.698 - 10284.111: 94.7524% ( 20) 00:08:08.872 10284.111 - 10334.523: 94.9057% ( 26) 00:08:08.872 10334.523 - 10384.935: 95.0943% ( 32) 00:08:08.872 10384.935 - 10435.348: 95.1887% ( 16) 00:08:08.872 10435.348 - 10485.760: 95.2948% ( 18) 00:08:08.872 10485.760 - 10536.172: 95.5071% ( 36) 00:08:08.872 10536.172 - 10586.585: 95.6309% ( 21) 00:08:08.872 10586.585 - 10636.997: 95.7370% ( 18) 00:08:08.872 10636.997 - 10687.409: 95.8432% ( 18) 00:08:08.872 10687.409 - 10737.822: 95.9316% ( 15) 00:08:08.872 10737.822 - 10788.234: 96.0142% ( 14) 00:08:08.872 10788.234 - 10838.646: 96.0731% ( 10) 00:08:08.872 10838.646 - 10889.058: 96.1380% ( 11) 00:08:08.872 10889.058 - 10939.471: 96.2441% ( 18) 00:08:08.872 10939.471 - 10989.883: 96.3797% ( 23) 00:08:08.872 10989.883 - 11040.295: 96.4917% ( 19) 00:08:08.872 11040.295 - 11090.708: 96.5625% ( 12) 00:08:08.872 11090.708 - 11141.120: 96.6333% ( 12) 00:08:08.872 11141.120 - 11191.532: 96.6863% ( 9) 00:08:08.872 11191.532 - 11241.945: 96.7571% ( 12) 00:08:08.872 11241.945 - 11292.357: 96.8573% ( 17) 00:08:08.872 11292.357 - 11342.769: 96.9752% ( 20) 00:08:08.872 11342.769 - 11393.182: 97.1875% ( 36) 00:08:08.872 11393.182 - 11443.594: 97.3526% ( 28) 00:08:08.872 11443.594 - 11494.006: 97.4646% ( 19) 00:08:08.872 11494.006 - 11544.418: 97.5531% ( 15) 00:08:08.872 11544.418 - 11594.831: 97.6238% ( 12) 00:08:08.872 11594.831 - 11645.243: 97.6887% ( 11) 00:08:08.872 11645.243 - 11695.655: 97.7653% ( 13) 00:08:08.872 11695.655 - 11746.068: 97.8361% ( 12) 00:08:08.872 11746.068 - 11796.480: 97.8950% ( 10) 00:08:08.872 11796.480 - 11846.892: 97.9717% ( 13) 00:08:08.872 11846.892 - 11897.305: 98.0955% ( 21) 00:08:08.872 11897.305 - 11947.717: 98.1781% ( 14) 00:08:08.872 11947.717 - 11998.129: 98.2724% ( 16) 00:08:08.872 11998.129 - 12048.542: 98.3255% ( 9) 00:08:08.872 12048.542 - 12098.954: 98.4139% ( 15) 00:08:08.872 12098.954 - 12149.366: 98.4434% ( 5) 00:08:08.872 12149.366 - 12199.778: 98.4611% ( 3) 00:08:08.872 12199.778 - 12250.191: 98.4847% ( 4) 00:08:08.872 12250.191 - 12300.603: 98.5024% ( 3) 00:08:08.872 12300.603 - 12351.015: 98.5200% ( 3) 00:08:08.872 12351.015 - 12401.428: 98.5495% ( 5) 00:08:08.872 12401.428 - 12451.840: 98.5967% ( 8) 00:08:08.872 12451.840 - 
12502.252: 98.6910% ( 16) 00:08:08.872 12502.252 - 12552.665: 98.7795% ( 15) 00:08:08.872 12552.665 - 12603.077: 98.8384% ( 10) 00:08:08.872 12603.077 - 12653.489: 98.9151% ( 13) 00:08:08.872 12653.489 - 12703.902: 99.1156% ( 34) 00:08:08.872 12703.902 - 12754.314: 99.1509% ( 6) 00:08:08.872 12754.314 - 12804.726: 99.1745% ( 4) 00:08:08.872 12804.726 - 12855.138: 99.1922% ( 3) 00:08:08.872 12855.138 - 12905.551: 99.2099% ( 3) 00:08:08.872 12905.551 - 13006.375: 99.2453% ( 6) 00:08:08.872 21173.169 - 21273.994: 99.2512% ( 1) 00:08:08.872 21273.994 - 21374.818: 99.2748% ( 4) 00:08:08.872 21374.818 - 21475.643: 99.2983% ( 4) 00:08:08.872 21475.643 - 21576.468: 99.3219% ( 4) 00:08:08.872 21576.468 - 21677.292: 99.3455% ( 4) 00:08:08.872 21677.292 - 21778.117: 99.3691% ( 4) 00:08:08.872 21778.117 - 21878.942: 99.3986% ( 5) 00:08:08.872 21878.942 - 21979.766: 99.4222% ( 4) 00:08:08.872 21979.766 - 22080.591: 99.4458% ( 4) 00:08:08.872 22080.591 - 22181.415: 99.4634% ( 3) 00:08:08.872 22181.415 - 22282.240: 99.4929% ( 5) 00:08:08.872 22282.240 - 22383.065: 99.5165% ( 4) 00:08:08.872 22383.065 - 22483.889: 99.5401% ( 4) 00:08:08.872 22483.889 - 22584.714: 99.5637% ( 4) 00:08:08.872 22584.714 - 22685.538: 99.5873% ( 4) 00:08:08.872 22685.538 - 22786.363: 99.6108% ( 4) 00:08:08.872 22786.363 - 22887.188: 99.6226% ( 2) 00:08:08.872 25609.452 - 25710.277: 99.6344% ( 2) 00:08:08.872 25710.277 - 25811.102: 99.6580% ( 4) 00:08:08.872 25811.102 - 26012.751: 99.7052% ( 8) 00:08:08.872 26012.751 - 26214.400: 99.7524% ( 8) 00:08:08.872 26214.400 - 26416.049: 99.7936% ( 7) 00:08:08.872 26416.049 - 26617.698: 99.8408% ( 8) 00:08:08.872 26617.698 - 26819.348: 99.8939% ( 9) 00:08:08.872 26819.348 - 27020.997: 99.9410% ( 8) 00:08:08.872 27020.997 - 27222.646: 99.9941% ( 9) 00:08:08.872 27222.646 - 27424.295: 100.0000% ( 1) 00:08:08.872 00:08:08.872 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:08:08.872 ============================================================================== 00:08:08.872 Range in us Cumulative IO count 00:08:08.872 5520.148 - 5545.354: 0.0059% ( 1) 00:08:08.872 5595.766 - 5620.972: 0.0117% ( 1) 00:08:08.872 5620.972 - 5646.178: 0.0176% ( 1) 00:08:08.872 5646.178 - 5671.385: 0.0294% ( 2) 00:08:08.872 5671.385 - 5696.591: 0.0352% ( 1) 00:08:08.872 5696.591 - 5721.797: 0.0470% ( 2) 00:08:08.872 5721.797 - 5747.003: 0.0646% ( 3) 00:08:08.872 5747.003 - 5772.209: 0.0940% ( 5) 00:08:08.872 5772.209 - 5797.415: 0.1292% ( 6) 00:08:08.872 5797.415 - 5822.622: 0.1762% ( 8) 00:08:08.872 5822.622 - 5847.828: 0.2232% ( 8) 00:08:08.872 5847.828 - 5873.034: 0.2643% ( 7) 00:08:08.872 5873.034 - 5898.240: 0.4171% ( 26) 00:08:08.872 5898.240 - 5923.446: 0.6050% ( 32) 00:08:08.872 5923.446 - 5948.652: 0.7812% ( 30) 00:08:08.872 5948.652 - 5973.858: 0.9692% ( 32) 00:08:08.872 5973.858 - 5999.065: 1.2688% ( 51) 00:08:08.872 5999.065 - 6024.271: 1.4098% ( 24) 00:08:08.872 6024.271 - 6049.477: 1.6271% ( 37) 00:08:08.872 6049.477 - 6074.683: 2.4377% ( 138) 00:08:08.872 6074.683 - 6099.889: 2.7902% ( 60) 00:08:08.872 6099.889 - 6125.095: 3.3541% ( 96) 00:08:08.872 6125.095 - 6150.302: 4.0590% ( 120) 00:08:08.872 6150.302 - 6175.508: 4.6346% ( 98) 00:08:08.872 6175.508 - 6200.714: 5.6508% ( 173) 00:08:08.872 6200.714 - 6225.920: 7.2956% ( 280) 00:08:08.872 6225.920 - 6251.126: 8.2178% ( 157) 00:08:08.872 6251.126 - 6276.332: 9.3867% ( 199) 00:08:08.872 6276.332 - 6301.538: 11.0785% ( 288) 00:08:08.872 6301.538 - 6326.745: 12.8407% ( 300) 00:08:08.872 6326.745 - 6351.951: 14.0331% ( 203) 
00:08:08.872 6351.951 - 6377.157: 15.5369% ( 256) 00:08:08.872 6377.157 - 6402.363: 17.0172% ( 252) 00:08:08.872 6402.363 - 6427.569: 18.2390% ( 208) 00:08:08.872 6427.569 - 6452.775: 19.6664% ( 243) 00:08:08.872 6452.775 - 6503.188: 22.1511% ( 423) 00:08:08.873 6503.188 - 6553.600: 24.7768% ( 447) 00:08:08.873 6553.600 - 6604.012: 27.1793% ( 409) 00:08:08.873 6604.012 - 6654.425: 29.8990% ( 463) 00:08:08.873 6654.425 - 6704.837: 32.7303% ( 482) 00:08:08.873 6704.837 - 6755.249: 35.9786% ( 553) 00:08:08.873 6755.249 - 6805.662: 38.8452% ( 488) 00:08:08.873 6805.662 - 6856.074: 42.5987% ( 639) 00:08:08.873 6856.074 - 6906.486: 46.9161% ( 735) 00:08:08.873 6906.486 - 6956.898: 50.7989% ( 661) 00:08:08.873 6956.898 - 7007.311: 54.7345% ( 670) 00:08:08.873 7007.311 - 7057.723: 58.1826% ( 587) 00:08:08.873 7057.723 - 7108.135: 60.8846% ( 460) 00:08:08.873 7108.135 - 7158.548: 62.3473% ( 249) 00:08:08.873 7158.548 - 7208.960: 63.8393% ( 254) 00:08:08.873 7208.960 - 7259.372: 65.3254% ( 253) 00:08:08.873 7259.372 - 7309.785: 66.3416% ( 173) 00:08:08.873 7309.785 - 7360.197: 66.9760% ( 108) 00:08:08.873 7360.197 - 7410.609: 67.8102% ( 142) 00:08:08.873 7410.609 - 7461.022: 68.6678% ( 146) 00:08:08.873 7461.022 - 7511.434: 69.4196% ( 128) 00:08:08.873 7511.434 - 7561.846: 70.3360% ( 156) 00:08:08.873 7561.846 - 7612.258: 70.9234% ( 100) 00:08:08.873 7612.258 - 7662.671: 71.3933% ( 80) 00:08:08.873 7662.671 - 7713.083: 71.9514% ( 95) 00:08:08.873 7713.083 - 7763.495: 72.3860% ( 74) 00:08:08.873 7763.495 - 7813.908: 72.9441% ( 95) 00:08:08.873 7813.908 - 7864.320: 73.2906% ( 59) 00:08:08.873 7864.320 - 7914.732: 73.6783% ( 66) 00:08:08.873 7914.732 - 7965.145: 74.1130% ( 74) 00:08:08.873 7965.145 - 8015.557: 74.3127% ( 34) 00:08:08.873 8015.557 - 8065.969: 74.6652% ( 60) 00:08:08.873 8065.969 - 8116.382: 75.1116% ( 76) 00:08:08.873 8116.382 - 8166.794: 75.5052% ( 67) 00:08:08.873 8166.794 - 8217.206: 76.0338% ( 90) 00:08:08.873 8217.206 - 8267.618: 76.8151% ( 133) 00:08:08.873 8267.618 - 8318.031: 77.8372% ( 174) 00:08:08.873 8318.031 - 8368.443: 78.9121% ( 183) 00:08:08.873 8368.443 - 8418.855: 80.2044% ( 220) 00:08:08.873 8418.855 - 8469.268: 81.5496% ( 229) 00:08:08.873 8469.268 - 8519.680: 82.7185% ( 199) 00:08:08.873 8519.680 - 8570.092: 84.2869% ( 267) 00:08:08.873 8570.092 - 8620.505: 85.2032% ( 156) 00:08:08.873 8620.505 - 8670.917: 86.0080% ( 137) 00:08:08.873 8670.917 - 8721.329: 86.6541% ( 110) 00:08:08.873 8721.329 - 8771.742: 87.2944% ( 109) 00:08:08.873 8771.742 - 8822.154: 87.9523% ( 112) 00:08:08.873 8822.154 - 8872.566: 88.4164% ( 79) 00:08:08.873 8872.566 - 8922.978: 88.8158% ( 68) 00:08:08.873 8922.978 - 8973.391: 89.1624% ( 59) 00:08:08.873 8973.391 - 9023.803: 89.3797% ( 37) 00:08:08.873 9023.803 - 9074.215: 89.6558% ( 47) 00:08:08.873 9074.215 - 9124.628: 89.8555% ( 34) 00:08:08.873 9124.628 - 9175.040: 90.0023% ( 25) 00:08:08.873 9175.040 - 9225.452: 90.1786% ( 30) 00:08:08.873 9225.452 - 9275.865: 90.3430% ( 28) 00:08:08.873 9275.865 - 9326.277: 90.7307% ( 66) 00:08:08.873 9326.277 - 9376.689: 90.9363% ( 35) 00:08:08.873 9376.689 - 9427.102: 91.1948% ( 44) 00:08:08.873 9427.102 - 9477.514: 91.4767% ( 48) 00:08:08.873 9477.514 - 9527.926: 91.6236% ( 25) 00:08:08.873 9527.926 - 9578.338: 91.7411% ( 20) 00:08:08.873 9578.338 - 9628.751: 91.8644% ( 21) 00:08:08.873 9628.751 - 9679.163: 91.9819% ( 20) 00:08:08.873 9679.163 - 9729.575: 92.1405% ( 27) 00:08:08.873 9729.575 - 9779.988: 92.3344% ( 33) 00:08:08.873 9779.988 - 9830.400: 92.5693% ( 40) 00:08:08.873 9830.400 - 9880.812: 
92.8571% ( 49) 00:08:08.873 9880.812 - 9931.225: 93.0980% ( 41) 00:08:08.873 9931.225 - 9981.637: 93.2742% ( 30) 00:08:08.873 9981.637 - 10032.049: 93.4622% ( 32) 00:08:08.873 10032.049 - 10082.462: 93.6678% ( 35) 00:08:08.873 10082.462 - 10132.874: 93.8381% ( 29) 00:08:08.873 10132.874 - 10183.286: 94.0613% ( 38) 00:08:08.873 10183.286 - 10233.698: 94.3668% ( 52) 00:08:08.873 10233.698 - 10284.111: 94.6076% ( 41) 00:08:08.873 10284.111 - 10334.523: 94.9718% ( 62) 00:08:08.873 10334.523 - 10384.935: 95.1833% ( 36) 00:08:08.873 10384.935 - 10435.348: 95.3654% ( 31) 00:08:08.873 10435.348 - 10485.760: 95.6767% ( 53) 00:08:08.873 10485.760 - 10536.172: 95.8412% ( 28) 00:08:08.873 10536.172 - 10586.585: 95.9763% ( 23) 00:08:08.873 10586.585 - 10636.997: 96.0761% ( 17) 00:08:08.873 10636.997 - 10687.409: 96.2230% ( 25) 00:08:08.873 10687.409 - 10737.822: 96.3522% ( 22) 00:08:08.873 10737.822 - 10788.234: 96.4932% ( 24) 00:08:08.873 10788.234 - 10838.646: 96.5754% ( 14) 00:08:08.873 10838.646 - 10889.058: 96.6812% ( 18) 00:08:08.873 10889.058 - 10939.471: 96.7751% ( 16) 00:08:08.873 10939.471 - 10989.883: 96.8456% ( 12) 00:08:08.873 10989.883 - 11040.295: 96.9631% ( 20) 00:08:08.873 11040.295 - 11090.708: 97.0395% ( 13) 00:08:08.873 11090.708 - 11141.120: 97.1511% ( 19) 00:08:08.873 11141.120 - 11191.532: 97.2803% ( 22) 00:08:08.873 11191.532 - 11241.945: 97.3625% ( 14) 00:08:08.873 11241.945 - 11292.357: 97.4095% ( 8) 00:08:08.873 11292.357 - 11342.769: 97.4389% ( 5) 00:08:08.873 11342.769 - 11393.182: 97.4742% ( 6) 00:08:08.873 11393.182 - 11443.594: 97.5035% ( 5) 00:08:08.873 11443.594 - 11494.006: 97.5329% ( 5) 00:08:08.873 11494.006 - 11544.418: 97.5623% ( 5) 00:08:08.873 11544.418 - 11594.831: 97.5858% ( 4) 00:08:08.873 11594.831 - 11645.243: 97.6269% ( 7) 00:08:08.873 11645.243 - 11695.655: 97.6621% ( 6) 00:08:08.873 11695.655 - 11746.068: 97.7091% ( 8) 00:08:08.873 11746.068 - 11796.480: 97.7679% ( 10) 00:08:08.873 11796.480 - 11846.892: 97.8148% ( 8) 00:08:08.873 11846.892 - 11897.305: 97.9382% ( 21) 00:08:08.873 11897.305 - 11947.717: 98.0851% ( 25) 00:08:08.873 11947.717 - 11998.129: 98.1497% ( 11) 00:08:08.873 11998.129 - 12048.542: 98.1732% ( 4) 00:08:08.873 12048.542 - 12098.954: 98.2025% ( 5) 00:08:08.873 12098.954 - 12149.366: 98.2202% ( 3) 00:08:08.873 12149.366 - 12199.778: 98.2319% ( 2) 00:08:08.873 12199.778 - 12250.191: 98.2495% ( 3) 00:08:08.873 12250.191 - 12300.603: 98.3024% ( 9) 00:08:08.873 12300.603 - 12351.015: 98.4610% ( 27) 00:08:08.873 12351.015 - 12401.428: 98.5197% ( 10) 00:08:08.873 12401.428 - 12451.840: 98.5550% ( 6) 00:08:08.873 12451.840 - 12502.252: 98.5726% ( 3) 00:08:08.873 12502.252 - 12552.665: 98.5961% ( 4) 00:08:08.873 12552.665 - 12603.077: 98.6372% ( 7) 00:08:08.873 12603.077 - 12653.489: 98.8369% ( 34) 00:08:08.873 12653.489 - 12703.902: 98.9074% ( 12) 00:08:08.873 12703.902 - 12754.314: 98.9309% ( 4) 00:08:08.873 12754.314 - 12804.726: 98.9662% ( 6) 00:08:08.873 12804.726 - 12855.138: 99.0014% ( 6) 00:08:08.873 12855.138 - 12905.551: 99.0308% ( 5) 00:08:08.873 12905.551 - 13006.375: 99.1071% ( 13) 00:08:08.873 13006.375 - 13107.200: 99.1718% ( 11) 00:08:08.873 13107.200 - 13208.025: 99.2188% ( 8) 00:08:08.873 13208.025 - 13308.849: 99.2481% ( 5) 00:08:08.873 15930.289 - 16031.114: 99.2599% ( 2) 00:08:08.873 16031.114 - 16131.938: 99.2834% ( 4) 00:08:08.873 16131.938 - 16232.763: 99.3069% ( 4) 00:08:08.873 16232.763 - 16333.588: 99.3304% ( 4) 00:08:08.873 16333.588 - 16434.412: 99.3597% ( 5) 00:08:08.873 16434.412 - 16535.237: 99.3832% ( 4) 
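Each latency histogram above is cumulative: a bucket line has the form "<lo> - <hi>: <cumulative %> ( <IO count> )", so a latency percentile can be read off as the upper bound of the first bucket whose cumulative share crosses the target. A minimal awk sketch, assuming the full per-bucket listing (one bucket per line, with the 00:08:08.* log-time prefix stripped) has been saved to histogram.txt, a hypothetical file name:

    # Print an approximate p99 in us: the upper bound of the first
    # bucket whose cumulative percentage reaches 99%.
    awk -F'[:%]' '{ split($1, r, " - "); if ($2 + 0 >= 99) { print r[2]; exit } }' histogram.txt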
00:08:08.873 03:35:01 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:08.873 03:35:01 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:08:08.873 03:35:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:08.873 03:35:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:08.873 ************************************
00:08:08.873 START TEST nvme_hello_world
00:08:08.873 ************************************
00:08:08.873 03:35:01 nvme.nvme_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:08:08.873 Initializing NVMe Controllers
00:08:08.873 Attached to 0000:00:10.0
00:08:08.873 Namespace ID: 1 size: 6GB
00:08:08.873 Attached to 0000:00:11.0
00:08:08.873 Namespace ID: 1 size: 5GB
00:08:08.873 Attached to 0000:00:13.0
00:08:08.873 Namespace ID: 1 size: 1GB
00:08:08.873 Attached to 0000:00:12.0
00:08:08.873 Namespace ID: 1 size: 4GB
00:08:08.873 Namespace ID: 2 size: 4GB
00:08:08.874 Namespace ID: 3 size: 4GB
00:08:08.874 Initialization complete.
00:08:08.874 INFO: using host memory buffer for IO
00:08:08.874 Hello world!
00:08:08.874 INFO: using host memory buffer for IO
00:08:08.874 Hello world!
00:08:08.874 INFO: using host memory buffer for IO
00:08:08.874 Hello world!
00:08:08.874 INFO: using host memory buffer for IO
00:08:08.874 Hello world!
00:08:08.874 INFO: using host memory buffer for IO
00:08:08.874 Hello world!
00:08:08.874 INFO: using host memory buffer for IO
00:08:08.874 Hello world!
00:08:08.874 ************************************
00:08:08.874 END TEST nvme_hello_world
00:08:08.874 ************************************
00:08:08.874
00:08:08.874 real 0m0.217s
00:08:08.874 user 0m0.068s
00:08:08.874 sys 0m0.108s
00:08:08.874 03:35:01 nvme.nvme_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:08.874 03:35:01 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
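The START/END banners and the real/user/sys triple around every test section come from the run_test helper in autotest_common.sh, which wraps each test command in the shell's time builtin. A simplified sketch of that pattern (run_test_sketch is a stand-in, not the real helper, which also handles xtrace and exit-code bookkeeping):

    run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"            # emits the real/user/sys lines seen in this log
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
    }
    run_test_sketch nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0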
00:08:08.874 03:35:01 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:08.874 03:35:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:08.874 03:35:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:08.874 03:35:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:08.874 ************************************
00:08:08.874 START TEST nvme_sgl
00:08:08.874 ************************************
00:08:08.874 03:35:01 nvme.nvme_sgl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:08:09.132 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:08:09.132 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:08:09.132 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:08:09.132 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:08:09.132 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:08:09.132 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:08:09.132 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:08:09.132 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:08:09.132 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:08:09.132 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:08:09.132 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:08:09.132 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:08:09.132 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:08:09.132 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:08:09.133 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:08:09.133 NVMe Readv/Writev Request test
00:08:09.133 Attached to 0000:00:10.0
00:08:09.133 Attached to 0000:00:11.0
00:08:09.133 Attached to 0000:00:13.0
00:08:09.133 Attached to 0000:00:12.0
00:08:09.133 0000:00:10.0: build_io_request_2 test passed
00:08:09.133 0000:00:10.0: build_io_request_4 test passed
00:08:09.133 0000:00:10.0: build_io_request_5 test passed
00:08:09.133 0000:00:10.0: build_io_request_6 test passed
00:08:09.133 0000:00:10.0: build_io_request_7 test passed
00:08:09.133 0000:00:10.0: build_io_request_10 test passed
00:08:09.133 0000:00:11.0: build_io_request_2 test passed
00:08:09.133 0000:00:11.0: build_io_request_4 test passed
00:08:09.133 0000:00:11.0: build_io_request_5 test passed
00:08:09.133 0000:00:11.0: build_io_request_6 test passed
00:08:09.133 0000:00:11.0: build_io_request_7 test passed
00:08:09.133 0000:00:11.0: build_io_request_10 test passed
00:08:09.133 Cleaning up...
00:08:09.391 ************************************
00:08:09.391 END TEST nvme_sgl
00:08:09.391 ************************************
00:08:09.391
00:08:09.391 real 0m0.282s
00:08:09.391 user 0m0.143s
00:08:09.391 sys 0m0.092s
00:08:09.391 03:35:01 nvme.nvme_sgl -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:09.391 03:35:01 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
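The "Invalid IO length parameter" lines above are expected rejections: sgl deliberately builds malformed scatter-gather requests, and each build_io_request_N case should end either in "test passed" or in that rejection. A hypothetical post-processing check over a saved copy of this output (sgl.log is an assumed file name):

    # Flag any build_io_request case that neither passed nor was
    # rejected with the expected length error.
    awk '/build_io_request_/ && !/test passed/ && !/Invalid IO length parameter/ { bad++; print "unexpected: " $0 }
         END { exit (bad > 0) }' sgl.log && echo "all SGL cases accounted for"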
00:08:09.391 03:35:01 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:09.391 03:35:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:09.391 03:35:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:09.391 03:35:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:09.391 ************************************
00:08:09.391 START TEST nvme_e2edp
00:08:09.391 ************************************
00:08:09.391 03:35:01 nvme.nvme_e2edp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:08:09.391 NVMe Write/Read with End-to-End data protection test
00:08:09.391 Attached to 0000:00:10.0
00:08:09.391 Attached to 0000:00:11.0
00:08:09.391 Attached to 0000:00:13.0
00:08:09.391 Attached to 0000:00:12.0
00:08:09.391 Cleaning up...
00:08:09.391 ************************************
00:08:09.391 END TEST nvme_e2edp
00:08:09.391 ************************************
00:08:09.391
00:08:09.391 real 0m0.194s
00:08:09.391 user 0m0.068s
00:08:09.391 sys 0m0.086s
00:08:09.391 03:35:01 nvme.nvme_e2edp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:09.391 03:35:01 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
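nvme_dp only has work to do on namespaces formatted with a protection-information type, which is presumably why the run above attaches to all four controllers and goes straight to cleanup. On a kernel-attached drive (not these devices, which SPDK has unbound from the kernel driver) the namespace's protection capabilities and the type in use can be checked with nvme-cli; /dev/nvme0n1 is a placeholder:

    nvme id-ns /dev/nvme0n1 | grep -Ei 'dpc|dps'   # dpc = supported PI types, dps = type currently enabled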
00:08:09.649 03:35:01 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:09.649 03:35:01 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:09.649 03:35:01 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:09.649 03:35:01 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:09.649 ************************************
00:08:09.649 START TEST nvme_reserve
00:08:09.649 ************************************
00:08:09.649 03:35:01 nvme.nvme_reserve -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:08:09.649 =====================================================
00:08:09.649 NVMe Controller at PCI bus 0, device 16, function 0
00:08:09.649 =====================================================
00:08:09.649 Reservations: Not Supported
00:08:09.649 =====================================================
00:08:09.649 NVMe Controller at PCI bus 0, device 17, function 0
00:08:09.649 =====================================================
00:08:09.649 Reservations: Not Supported
00:08:09.649 =====================================================
00:08:09.649 NVMe Controller at PCI bus 0, device 19, function 0
00:08:09.649 =====================================================
00:08:09.649 Reservations: Not Supported
00:08:09.649 =====================================================
00:08:09.649 NVMe Controller at PCI bus 0, device 18, function 0
00:08:09.649 =====================================================
00:08:09.649 Reservations: Not Supported
00:08:09.649 Reservation test passed
00:08:09.649 ************************************
00:08:09.649 END TEST nvme_reserve
00:08:09.649 ************************************
00:08:09.649
00:08:09.649 real 0m0.212s
00:08:09.649 user 0m0.057s
00:08:09.649 sys 0m0.103s
00:08:09.907 03:35:02 nvme.nvme_reserve -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:09.907 03:35:02 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
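All four emulated controllers report "Reservations: Not Supported", so the reservation test reduces to probing and reporting. Support for the reservation command set is advertised by bit 5 of the controller's ONCS field; on a kernel-attached device it could be inspected with nvme-cli (/dev/nvme0 is a placeholder):

    nvme id-ctrl /dev/nvme0 | grep -i oncs   # bit 5 set => reservations supported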
00:08:09.907 03:35:02 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:09.907 03:35:02 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:08:09.907 03:35:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:09.907 03:35:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:09.907 ************************************
00:08:09.907 START TEST nvme_err_injection
00:08:09.907 ************************************
00:08:09.907 03:35:02 nvme.nvme_err_injection -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:08:09.907 NVMe Error Injection test
00:08:09.907 Attached to 0000:00:10.0
00:08:09.907 Attached to 0000:00:11.0
00:08:09.907 Attached to 0000:00:13.0
00:08:09.907 Attached to 0000:00:12.0
00:08:09.907 0000:00:10.0: get features failed as expected
00:08:09.907 0000:00:11.0: get features failed as expected
00:08:09.907 0000:00:13.0: get features failed as expected
00:08:09.907 0000:00:12.0: get features failed as expected
00:08:09.907 0000:00:10.0: get features successfully as expected
00:08:09.907 0000:00:11.0: get features successfully as expected
00:08:09.907 0000:00:13.0: get features successfully as expected
00:08:09.907 0000:00:12.0: get features successfully as expected
00:08:09.907 0000:00:10.0: read failed as expected
00:08:09.907 0000:00:11.0: read failed as expected
00:08:09.907 0000:00:13.0: read failed as expected
00:08:09.907 0000:00:12.0: read failed as expected
00:08:09.907 0000:00:10.0: read successfully as expected
00:08:09.907 0000:00:11.0: read successfully as expected
00:08:09.907 0000:00:13.0: read successfully as expected
00:08:09.907 0000:00:12.0: read successfully as expected
00:08:09.907 Cleaning up...
00:08:09.907 ************************************
00:08:09.907 END TEST nvme_err_injection
00:08:09.907 ************************************
00:08:09.907
00:08:09.907 real 0m0.227s
00:08:09.907 user 0m0.080s
00:08:09.907 sys 0m0.101s
00:08:10.165 03:35:02 nvme.nvme_err_injection -- common/autotest_common.sh@1126 -- # xtrace_disable
00:08:10.165 03:35:02 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
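The injection test exercises each path twice per controller: once with an error injected, so the get features and read commands fail (the "failed as expected" lines), and once after the injection is cleared (the "successfully as expected" lines). A quick symmetry check over a saved copy of the output (err_injection.log is an assumed name):

    # The two counts should match: every injected failure has a clean retry.
    test "$(grep -c ' failed as expected' err_injection.log)" \
       = "$(grep -c 'successfully as expected' err_injection.log)" && echo "injection pairs balanced"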
00:08:10.165 03:35:02 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:10.165 03:35:02 nvme -- common/autotest_common.sh@1101 -- # '[' 9 -le 1 ']'
00:08:10.165 03:35:02 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:08:10.165 03:35:02 nvme -- common/autotest_common.sh@10 -- # set +x
00:08:10.165 ************************************
00:08:10.165 START TEST nvme_overhead
00:08:10.165 ************************************
00:08:10.165 03:35:02 nvme.nvme_overhead -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:08:11.095 Initializing NVMe Controllers
00:08:11.095 Attached to 0000:00:10.0
00:08:11.095 Attached to 0000:00:11.0
00:08:11.095 Attached to 0000:00:13.0
00:08:11.095 Attached to 0000:00:12.0
00:08:11.095 Initialization complete. Launching workers.
00:08:11.095 submit (in ns) avg, min, max = 11489.2, 9855.4, 293868.5
00:08:11.095 complete (in ns) avg, min, max = 7635.2, 7120.0, 238190.8
00:08:11.095
00:08:11.095 Submit histogram
00:08:11.095 ================
00:08:11.095 Range in us Cumulative Count
00:08:11.096 [per-bucket data omitted; buckets run from 9.846 us and reach 100.0000% at 294.597 us]
00:08:11.096
00:08:11.096 Complete histogram
00:08:11.096 ==================
00:08:11.096 Range in us Cumulative Count
00:08:11.355 [per-bucket data omitted; buckets run from 7.089 us and reach 99.8295% by 22.055 us, where the captured output breaks off]
22.154: 99.8471% ( 3) 00:08:11.355 22.154 - 22.252: 99.8648% ( 3) 00:08:11.355 22.252 - 22.351: 99.8765% ( 2) 00:08:11.355 22.449 - 22.548: 99.8824% ( 1) 00:08:11.355 22.843 - 22.942: 99.8883% ( 1) 00:08:11.355 23.532 - 23.631: 99.8942% ( 1) 00:08:11.355 23.729 - 23.828: 99.9001% ( 1) 00:08:11.355 24.025 - 24.123: 99.9059% ( 1) 00:08:11.355 24.418 - 24.517: 99.9118% ( 1) 00:08:11.355 27.569 - 27.766: 99.9177% ( 1) 00:08:11.355 27.766 - 27.963: 99.9294% ( 2) 00:08:11.355 28.357 - 28.554: 99.9353% ( 1) 00:08:11.355 28.751 - 28.948: 99.9412% ( 1) 00:08:11.355 29.342 - 29.538: 99.9471% ( 1) 00:08:11.355 30.326 - 30.523: 99.9530% ( 1) 00:08:11.355 39.582 - 39.778: 99.9588% ( 1) 00:08:11.355 50.806 - 51.200: 99.9647% ( 1) 00:08:11.355 51.594 - 51.988: 99.9765% ( 2) 00:08:11.355 57.895 - 58.289: 99.9824% ( 1) 00:08:11.355 60.258 - 60.652: 99.9882% ( 1) 00:08:11.355 223.705 - 225.280: 99.9941% ( 1) 00:08:11.355 237.883 - 239.458: 100.0000% ( 1) 00:08:11.355 00:08:11.355 ************************************ 00:08:11.355 END TEST nvme_overhead 00:08:11.355 ************************************ 00:08:11.355 00:08:11.355 real 0m1.198s 00:08:11.355 user 0m1.056s 00:08:11.355 sys 0m0.096s 00:08:11.355 03:35:03 nvme.nvme_overhead -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:11.355 03:35:03 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:08:11.355 03:35:03 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:11.355 03:35:03 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:11.355 03:35:03 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:11.355 03:35:03 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:11.355 ************************************ 00:08:11.355 START TEST nvme_arbitration 00:08:11.355 ************************************ 00:08:11.355 03:35:03 nvme.nvme_arbitration -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:08:14.646 Initializing NVMe Controllers 00:08:14.646 Attached to 0000:00:10.0 00:08:14.646 Attached to 0000:00:11.0 00:08:14.646 Attached to 0000:00:13.0 00:08:14.646 Attached to 0000:00:12.0 00:08:14.646 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:08:14.646 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:08:14.646 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:08:14.646 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:08:14.646 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:08:14.646 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:08:14.646 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:08:14.646 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:08:14.646 Initialization complete. Launching workers. 
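The arbitration summary that follows reports each core's throughput twice: once as IO/s and once as secs/100000 ios, where the latter is simply 100000 divided by the IO/s figure. A one-line sanity check (values taken from the core 0 line below; the awk invocation itself is only illustrative):

    awk -v iops=810.67 'BEGIN { printf "%.2f secs/100000 ios\n", 100000 / iops }'
    # prints 123.36, matching the QEMU NVMe Ctrl (12340 ) core 0 line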
00:08:14.646 Starting thread on core 1 with urgent priority queue 00:08:14.646 Starting thread on core 2 with urgent priority queue 00:08:14.646 Starting thread on core 3 with urgent priority queue 00:08:14.646 Starting thread on core 0 with urgent priority queue 00:08:14.646 QEMU NVMe Ctrl (12340 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:08:14.646 QEMU NVMe Ctrl (12342 ) core 0: 810.67 IO/s 123.36 secs/100000 ios 00:08:14.646 QEMU NVMe Ctrl (12341 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:08:14.646 QEMU NVMe Ctrl (12342 ) core 1: 917.33 IO/s 109.01 secs/100000 ios 00:08:14.646 QEMU NVMe Ctrl (12343 ) core 2: 1002.67 IO/s 99.73 secs/100000 ios 00:08:14.646 QEMU NVMe Ctrl (12342 ) core 3: 746.67 IO/s 133.93 secs/100000 ios 00:08:14.646 ======================================================== 00:08:14.646 00:08:14.646 ************************************ 00:08:14.646 END TEST nvme_arbitration 00:08:14.646 ************************************ 00:08:14.646 00:08:14.646 real 0m3.291s 00:08:14.646 user 0m9.187s 00:08:14.646 sys 0m0.112s 00:08:14.646 03:35:06 nvme.nvme_arbitration -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.646 03:35:06 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:08:14.646 03:35:07 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:14.646 03:35:07 nvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:14.646 03:35:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.646 03:35:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.646 ************************************ 00:08:14.646 START TEST nvme_single_aen 00:08:14.646 ************************************ 00:08:14.646 03:35:07 nvme.nvme_single_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:08:14.905 Asynchronous Event Request test 00:08:14.905 Attached to 0000:00:10.0 00:08:14.905 Attached to 0000:00:11.0 00:08:14.905 Attached to 0000:00:13.0 00:08:14.905 Attached to 0000:00:12.0 00:08:14.905 Reset controller to setup AER completions for this process 00:08:14.905 Registering asynchronous event callbacks... 
00:08:14.905 Getting orig temperature thresholds of all controllers 00:08:14.905 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.905 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.905 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.905 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:14.905 Setting all controllers temperature threshold low to trigger AER 00:08:14.905 Waiting for all controllers temperature threshold to be set lower 00:08:14.905 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.905 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:14.905 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.905 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:14.905 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.905 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:14.905 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:14.905 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:14.905 Waiting for all controllers to trigger AER and reset threshold 00:08:14.905 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.905 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.905 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.905 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:14.905 Cleaning up... 00:08:14.905 ************************************ 00:08:14.905 END TEST nvme_single_aen 00:08:14.905 ************************************ 00:08:14.905 00:08:14.905 real 0m0.204s 00:08:14.905 user 0m0.071s 00:08:14.905 sys 0m0.092s 00:08:14.905 03:35:07 nvme.nvme_single_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.905 03:35:07 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:08:14.905 03:35:07 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:08:14.905 03:35:07 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:14.905 03:35:07 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.905 03:35:07 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:14.905 ************************************ 00:08:14.905 START TEST nvme_doorbell_aers 00:08:14.905 ************************************ 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1125 -- # nvme_doorbell_aers 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 
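nvme_doorbell_aers builds its device list by asking gen_nvme.sh for the local controller config and extracting each PCIe address with jq, exactly as traced above. A minimal standalone reproduction, assuming the same /home/vagrant/spdk_repo layout as this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    # Collect every controller BDF from the generated JSON config.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || exit 1   # the harness aborts on an empty list
    printf '%s\n' "${bdfs[@]}"        # here: 0000:00:10.0 through 0000:00:13.0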
00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:14.905 03:35:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:15.163 [2024-10-01 03:35:07.566441] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:25.130 Executing: test_write_invalid_db 00:08:25.130 Waiting for AER completion... 00:08:25.130 Failure: test_write_invalid_db 00:08:25.130 00:08:25.130 Executing: test_invalid_db_write_overflow_sq 00:08:25.130 Waiting for AER completion... 00:08:25.130 Failure: test_invalid_db_write_overflow_sq 00:08:25.130 00:08:25.130 Executing: test_invalid_db_write_overflow_cq 00:08:25.130 Waiting for AER completion... 00:08:25.130 Failure: test_invalid_db_write_overflow_cq 00:08:25.130 00:08:25.130 03:35:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:25.130 03:35:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:25.130 [2024-10-01 03:35:17.612680] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:35.098 Executing: test_write_invalid_db 00:08:35.098 Waiting for AER completion... 00:08:35.098 Failure: test_write_invalid_db 00:08:35.098 00:08:35.098 Executing: test_invalid_db_write_overflow_sq 00:08:35.098 Waiting for AER completion... 00:08:35.098 Failure: test_invalid_db_write_overflow_sq 00:08:35.098 00:08:35.098 Executing: test_invalid_db_write_overflow_cq 00:08:35.098 Waiting for AER completion... 00:08:35.098 Failure: test_invalid_db_write_overflow_cq 00:08:35.098 00:08:35.098 03:35:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:35.098 03:35:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:35.098 [2024-10-01 03:35:27.631778] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:45.066 Executing: test_write_invalid_db 00:08:45.066 Waiting for AER completion... 00:08:45.066 Failure: test_write_invalid_db 00:08:45.066 00:08:45.066 Executing: test_invalid_db_write_overflow_sq 00:08:45.066 Waiting for AER completion... 00:08:45.066 Failure: test_invalid_db_write_overflow_sq 00:08:45.066 00:08:45.066 Executing: test_invalid_db_write_overflow_cq 00:08:45.066 Waiting for AER completion... 
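Each per-device pass is bounded by timeout --preserve-status 10, so a hung doorbell test cannot stall the suite while the doorbell_aers binary's own exit status is still propagated. A sketch of the loop the harness runs, assuming the bdfs array from the enumeration step above:

    for bdf in "${bdfs[@]}"; do
        # 10-second cap per controller; --preserve-status keeps the child's exit code.
        timeout --preserve-status 10 \
            "$rootdir/test/nvme/doorbell_aers/doorbell_aers" -r "trtype:PCIe traddr:$bdf"
    done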
00:08:45.066 Failure: test_invalid_db_write_overflow_cq 00:08:45.066 00:08:45.066 03:35:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:45.066 03:35:37 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:45.325 [2024-10-01 03:35:37.648785] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 Executing: test_write_invalid_db 00:08:55.305 Waiting for AER completion... 00:08:55.305 Failure: test_write_invalid_db 00:08:55.305 00:08:55.305 Executing: test_invalid_db_write_overflow_sq 00:08:55.305 Waiting for AER completion... 00:08:55.305 Failure: test_invalid_db_write_overflow_sq 00:08:55.305 00:08:55.305 Executing: test_invalid_db_write_overflow_cq 00:08:55.305 Waiting for AER completion... 00:08:55.305 Failure: test_invalid_db_write_overflow_cq 00:08:55.305 00:08:55.305 ************************************ 00:08:55.305 END TEST nvme_doorbell_aers 00:08:55.305 ************************************ 00:08:55.305 00:08:55.305 real 0m40.194s 00:08:55.305 user 0m33.921s 00:08:55.305 sys 0m5.869s 00:08:55.305 03:35:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.305 03:35:47 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:55.305 03:35:47 nvme -- nvme/nvme.sh@97 -- # uname 00:08:55.305 03:35:47 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:55.305 03:35:47 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:55.305 03:35:47 nvme -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:08:55.305 03:35:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.305 03:35:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.305 ************************************ 00:08:55.305 START TEST nvme_multi_aen 00:08:55.305 ************************************ 00:08:55.305 03:35:47 nvme.nvme_multi_aen -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:55.305 [2024-10-01 03:35:47.714266] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.714455] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.714504] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.715605] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.715625] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.715632] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.716703] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. 
Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.716803] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.716861] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.717960] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.718060] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 [2024-10-01 03:35:47.718115] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 63730) is not found. Dropping the request. 00:08:55.305 Child process pid: 64256 00:08:55.564 [Child] Asynchronous Event Request test 00:08:55.564 [Child] Attached to 0000:00:10.0 00:08:55.564 [Child] Attached to 0000:00:11.0 00:08:55.564 [Child] Attached to 0000:00:13.0 00:08:55.564 [Child] Attached to 0000:00:12.0 00:08:55.564 [Child] Registering asynchronous event callbacks... 00:08:55.564 [Child] Getting orig temperature thresholds of all controllers 00:08:55.564 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:55.564 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:55.564 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:55.564 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:55.564 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:55.564 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:55.564 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:55.564 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:55.564 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:55.564 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.564 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.564 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.564 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.564 [Child] Cleaning up... 00:08:55.564 Asynchronous Event Request test 00:08:55.564 Attached to 0000:00:10.0 00:08:55.564 Attached to 0000:00:11.0 00:08:55.564 Attached to 0000:00:13.0 00:08:55.564 Attached to 0000:00:12.0 00:08:55.564 Reset controller to setup AER completions for this process 00:08:55.564 Registering asynchronous event callbacks... 
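The child process re-runs the same temperature-threshold sequence, and every threshold below is printed in both Kelvin and Celsius; the conversion is a flat 273.15 offset (pure arithmetic, shown with awk only for convenience):

    awk 'BEGIN { printf "%.0f C\n", 343 - 273.15 }'   # 70 C, the original threshold
    awk 'BEGIN { printf "%.0f C\n", 323 - 273.15 }'   # 50 C, the current temperature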
00:08:55.564 Getting orig temperature thresholds of all controllers 00:08:55.564 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:55.564 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:55.564 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:55.564 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:55.564 Setting all controllers temperature threshold low to trigger AER 00:08:55.564 Waiting for all controllers temperature threshold to be set lower 00:08:55.564 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:55.564 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:55.564 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:55.564 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:55.564 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:55.564 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:55.564 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:55.564 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:55.564 Waiting for all controllers to trigger AER and reset threshold 00:08:55.564 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.564 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.564 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.564 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:55.565 Cleaning up... 00:08:55.565 00:08:55.565 real 0m0.430s 00:08:55.565 user 0m0.122s 00:08:55.565 sys 0m0.202s 00:08:55.565 03:35:47 nvme.nvme_multi_aen -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.565 03:35:47 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:55.565 ************************************ 00:08:55.565 END TEST nvme_multi_aen 00:08:55.565 ************************************ 00:08:55.565 03:35:47 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:55.565 03:35:47 nvme -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:55.565 03:35:47 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.565 03:35:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.565 ************************************ 00:08:55.565 START TEST nvme_startup 00:08:55.565 ************************************ 00:08:55.565 03:35:48 nvme.nvme_startup -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:55.823 Initializing NVMe Controllers 00:08:55.823 Attached to 0000:00:10.0 00:08:55.823 Attached to 0000:00:11.0 00:08:55.823 Attached to 0000:00:13.0 00:08:55.823 Attached to 0000:00:12.0 00:08:55.823 Initialization complete. 00:08:55.823 Time used:135787.688 (us). 
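nvme_startup reports its initialization time in microseconds; the 135787.688 us above is roughly 136 ms to bring up all four attached controllers. A quick conversion (illustrative only):

    awk 'BEGIN { printf "%.1f ms\n", 135787.688 / 1000 }'   # 135.8 ms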
00:08:55.823 ************************************ 00:08:55.823 END TEST nvme_startup 00:08:55.823 ************************************ 00:08:55.823 00:08:55.823 real 0m0.194s 00:08:55.823 user 0m0.062s 00:08:55.823 sys 0m0.091s 00:08:55.823 03:35:48 nvme.nvme_startup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:55.823 03:35:48 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:55.823 03:35:48 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:55.823 03:35:48 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:55.823 03:35:48 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:55.823 03:35:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:55.823 ************************************ 00:08:55.823 START TEST nvme_multi_secondary 00:08:55.824 ************************************ 00:08:55.824 03:35:48 nvme.nvme_multi_secondary -- common/autotest_common.sh@1125 -- # nvme_multi_secondary 00:08:55.824 03:35:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=64306 00:08:55.824 03:35:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:55.824 03:35:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=64307 00:08:55.824 03:35:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:55.824 03:35:48 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:59.106 Initializing NVMe Controllers 00:08:59.106 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:59.106 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.106 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.106 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.106 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:59.106 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:59.106 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:59.106 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:59.106 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:59.106 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:59.106 Initialization complete. Launching workers. 
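nvme_multi_secondary drives one primary and two secondary spdk_nvme_perf processes concurrently: -i 0 gives them a shared shm id so the secondaries attach to the primary's controllers, and distinct -c core masks keep them on separate cores. A hypothetical re-creation of the three jobs traced above (which instance ends up primary is an assumption from the masks and durations, not stated in the log):

    perf=$rootdir/build/bin/spdk_nvme_perf
    "$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &   # core 0 mask, 5-second run
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &   # core 1 mask, 3-second run
    "$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &   # core 2 mask, 3-second run
    wait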
00:08:59.106 ======================================================== 00:08:59.106 Latency(us) 00:08:59.106 Device Information : IOPS MiB/s Average min max 00:08:59.106 PCIE (0000:00:10.0) NSID 1 from core 1: 7805.17 30.49 2048.56 1005.85 5316.89 00:08:59.106 PCIE (0000:00:11.0) NSID 1 from core 1: 7805.17 30.49 2049.53 995.74 5320.84 00:08:59.106 PCIE (0000:00:13.0) NSID 1 from core 1: 7805.17 30.49 2049.48 980.51 5687.82 00:08:59.106 PCIE (0000:00:12.0) NSID 1 from core 1: 7805.17 30.49 2049.73 981.64 5573.30 00:08:59.106 PCIE (0000:00:12.0) NSID 2 from core 1: 7805.17 30.49 2049.77 980.48 5275.37 00:08:59.106 PCIE (0000:00:12.0) NSID 3 from core 1: 7805.17 30.49 2049.73 960.29 5491.52 00:08:59.106 ======================================================== 00:08:59.106 Total : 46831.05 182.93 2049.47 960.29 5687.82 00:08:59.106 00:08:59.106 Initializing NVMe Controllers 00:08:59.106 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:59.106 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:59.106 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:59.106 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:59.106 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:59.106 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:59.106 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:59.106 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:59.106 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:59.106 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:59.106 Initialization complete. Launching workers. 00:08:59.106 ======================================================== 00:08:59.106 Latency(us) 00:08:59.106 Device Information : IOPS MiB/s Average min max 00:08:59.106 PCIE (0000:00:10.0) NSID 1 from core 2: 3035.14 11.86 5270.38 1275.08 12890.13 00:08:59.106 PCIE (0000:00:11.0) NSID 1 from core 2: 3035.14 11.86 5271.31 1244.42 13157.06 00:08:59.106 PCIE (0000:00:13.0) NSID 1 from core 2: 3035.14 11.86 5270.80 1341.36 13038.03 00:08:59.106 PCIE (0000:00:12.0) NSID 1 from core 2: 3035.14 11.86 5270.74 1193.04 13604.66 00:08:59.106 PCIE (0000:00:12.0) NSID 2 from core 2: 3035.14 11.86 5271.10 1298.44 13255.26 00:08:59.106 PCIE (0000:00:12.0) NSID 3 from core 2: 3035.14 11.86 5271.26 961.86 12960.16 00:08:59.106 ======================================================== 00:08:59.106 Total : 18210.84 71.14 5270.93 961.86 13604.66 00:08:59.106 00:08:59.364 03:35:51 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 64306 00:09:01.265 Initializing NVMe Controllers 00:09:01.265 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:01.265 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:01.265 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:01.265 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:01.265 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:01.265 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:01.265 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:01.265 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:01.265 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:01.265 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:01.265 Initialization complete. Launching workers. 
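With -q 16 outstanding I/Os per namespace, the Average latency column follows Little's law: average latency is roughly queue depth divided by IOPS. Checking the first core 1 row above (7805.17 IOPS at queue depth 16):

    awk -v iops=7805.17 -v qd=16 'BEGIN { printf "%.0f us\n", qd / iops * 1e6 }'
    # prints 2050 us, in line with the reported 2048.56 us average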
00:09:01.265 ======================================================== 00:09:01.265 Latency(us) 00:09:01.265 Device Information : IOPS MiB/s Average min max 00:09:01.265 PCIE (0000:00:10.0) NSID 1 from core 0: 10786.42 42.13 1482.10 693.10 6094.02 00:09:01.265 PCIE (0000:00:11.0) NSID 1 from core 0: 10786.42 42.13 1482.96 683.63 6140.42 00:09:01.265 PCIE (0000:00:13.0) NSID 1 from core 0: 10786.42 42.13 1482.95 672.90 5844.40 00:09:01.265 PCIE (0000:00:12.0) NSID 1 from core 0: 10786.42 42.13 1482.94 641.38 5484.54 00:09:01.265 PCIE (0000:00:12.0) NSID 2 from core 0: 10789.62 42.15 1482.49 625.36 5589.27 00:09:01.265 PCIE (0000:00:12.0) NSID 3 from core 0: 10786.42 42.13 1482.92 621.81 5720.88 00:09:01.265 ======================================================== 00:09:01.265 Total : 64721.70 252.82 1482.73 621.81 6140.42 00:09:01.265 00:09:01.265 03:35:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 64307 00:09:01.265 03:35:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=64382 00:09:01.265 03:35:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:09:01.265 03:35:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=64383 00:09:01.265 03:35:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:09:01.265 03:35:53 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:09:04.550 Initializing NVMe Controllers 00:09:04.550 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:04.550 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:04.550 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:04.550 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:04.550 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:04.550 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:04.550 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:04.550 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:04.550 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:04.550 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:04.550 Initialization complete. Launching workers. 
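The Total row in each table is just the sum of the per-namespace IOPS rows; for the core 0 table above, five namespaces at 10786.42 IOPS plus one at 10789.62 reproduce the printed total up to rounding:

    awk 'BEGIN { printf "%.2f\n", 10786.42 * 5 + 10789.62 }'   # 64721.72 vs the reported 64721.70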
00:09:04.550 ======================================================== 00:09:04.550 Latency(us) 00:09:04.550 Device Information : IOPS MiB/s Average min max 00:09:04.550 PCIE (0000:00:10.0) NSID 1 from core 0: 7648.27 29.88 2090.66 714.73 9729.98 00:09:04.550 PCIE (0000:00:11.0) NSID 1 from core 0: 7648.27 29.88 2091.65 739.42 10324.47 00:09:04.550 PCIE (0000:00:13.0) NSID 1 from core 0: 7648.27 29.88 2092.11 738.65 11561.53 00:09:04.550 PCIE (0000:00:12.0) NSID 1 from core 0: 7648.27 29.88 2092.49 720.80 10445.04 00:09:04.550 PCIE (0000:00:12.0) NSID 2 from core 0: 7653.60 29.90 2091.53 728.91 11583.03 00:09:04.550 PCIE (0000:00:12.0) NSID 3 from core 0: 7648.27 29.88 2093.02 725.95 10987.11 00:09:04.550 ======================================================== 00:09:04.550 Total : 45894.97 179.28 2091.91 714.73 11583.03 00:09:04.550 00:09:04.550 Initializing NVMe Controllers 00:09:04.550 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:04.550 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:04.550 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:04.550 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:04.550 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:09:04.550 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:09:04.550 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:09:04.550 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:09:04.550 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:09:04.550 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:09:04.550 Initialization complete. Launching workers. 00:09:04.550 ======================================================== 00:09:04.550 Latency(us) 00:09:04.550 Device Information : IOPS MiB/s Average min max 00:09:04.550 PCIE (0000:00:10.0) NSID 1 from core 1: 6918.97 27.03 2311.06 862.33 13353.92 00:09:04.550 PCIE (0000:00:11.0) NSID 1 from core 1: 6918.97 27.03 2312.01 976.14 13017.57 00:09:04.550 PCIE (0000:00:13.0) NSID 1 from core 1: 6918.97 27.03 2311.99 1039.44 12915.08 00:09:04.550 PCIE (0000:00:12.0) NSID 1 from core 1: 6918.97 27.03 2311.96 935.67 12722.96 00:09:04.550 PCIE (0000:00:12.0) NSID 2 from core 1: 6918.97 27.03 2311.94 1024.63 12823.49 00:09:04.550 PCIE (0000:00:12.0) NSID 3 from core 1: 6918.97 27.03 2311.91 1025.46 13317.10 00:09:04.550 ======================================================== 00:09:04.550 Total : 41513.85 162.16 2311.81 862.33 13353.92 00:09:04.550 00:09:06.467 Initializing NVMe Controllers 00:09:06.467 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:06.467 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:06.467 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:06.467 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:06.467 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:09:06.467 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:09:06.467 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:09:06.467 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:09:06.467 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:09:06.467 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:09:06.467 Initialization complete. Launching workers. 
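The MiB/s column is derived from IOPS and the 4096-byte I/O size (-o 4096): MiB/s = IOPS x 4096 / 2^20. Checking a 7648.27-IOPS row from the tables above:

    awk -v iops=7648.27 'BEGIN { printf "%.2f MiB/s\n", iops * 4096 / 1048576 }'   # 29.88, matching the table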
00:09:06.467 ======================================================== 00:09:06.467 Latency(us) 00:09:06.467 Device Information : IOPS MiB/s Average min max 00:09:06.467 PCIE (0000:00:10.0) NSID 1 from core 2: 3006.91 11.75 5319.37 814.63 26475.46 00:09:06.467 PCIE (0000:00:11.0) NSID 1 from core 2: 3006.91 11.75 5321.25 786.85 21778.87 00:09:06.467 PCIE (0000:00:13.0) NSID 1 from core 2: 3006.91 11.75 5320.95 830.11 22257.90 00:09:06.467 PCIE (0000:00:12.0) NSID 1 from core 2: 3006.91 11.75 5321.15 838.91 20709.18 00:09:06.467 PCIE (0000:00:12.0) NSID 2 from core 2: 3006.91 11.75 5320.80 830.75 25942.22 00:09:06.467 PCIE (0000:00:12.0) NSID 3 from core 2: 3006.91 11.75 5320.73 828.03 22532.81 00:09:06.467 ======================================================== 00:09:06.467 Total : 18041.45 70.47 5320.71 786.85 26475.46 00:09:06.467 00:09:06.467 ************************************ 00:09:06.467 END TEST nvme_multi_secondary 00:09:06.467 ************************************ 00:09:06.467 03:35:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 64382 00:09:06.467 03:35:58 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 64383 00:09:06.467 00:09:06.467 real 0m10.609s 00:09:06.467 user 0m18.356s 00:09:06.467 sys 0m0.639s 00:09:06.467 03:35:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:06.467 03:35:58 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:09:06.467 03:35:58 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:09:06.467 03:35:58 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:09:06.467 03:35:58 nvme -- common/autotest_common.sh@1089 -- # [[ -e /proc/63327 ]] 00:09:06.467 03:35:58 nvme -- common/autotest_common.sh@1090 -- # kill 63327 00:09:06.467 03:35:58 nvme -- common/autotest_common.sh@1091 -- # wait 63327 00:09:06.467 [2024-10-01 03:35:58.898257] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.898512] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.898553] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.898575] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.901969] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.902167] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.902182] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.902193] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.904243] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 
00:09:06.467 [2024-10-01 03:35:58.904282] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.904293] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.467 [2024-10-01 03:35:58.904303] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.468 [2024-10-01 03:35:58.906373] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.468 [2024-10-01 03:35:58.906417] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.468 [2024-10-01 03:35:58.906429] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.468 [2024-10-01 03:35:58.906439] nvme_pcie_common.c: 296:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64255) is not found. Dropping the request. 00:09:06.728 03:35:59 nvme -- common/autotest_common.sh@1093 -- # rm -f /var/run/spdk_stub0 00:09:06.728 03:35:59 nvme -- common/autotest_common.sh@1097 -- # echo 2 00:09:06.728 03:35:59 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:06.728 03:35:59 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:06.728 03:35:59 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:06.728 03:35:59 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:06.728 ************************************ 00:09:06.728 START TEST bdev_nvme_reset_stuck_adm_cmd 00:09:06.728 ************************************ 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:09:06.728 * Looking for test storage... 
00:09:06.728 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lcov --version 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.728 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:06.728 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.729 --rc genhtml_branch_coverage=1 00:09:06.729 --rc genhtml_function_coverage=1 00:09:06.729 --rc genhtml_legend=1 00:09:06.729 --rc geninfo_all_blocks=1 00:09:06.729 --rc geninfo_unexecuted_blocks=1 00:09:06.729 00:09:06.729 ' 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:06.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.729 --rc genhtml_branch_coverage=1 00:09:06.729 --rc genhtml_function_coverage=1 00:09:06.729 --rc genhtml_legend=1 00:09:06.729 --rc geninfo_all_blocks=1 00:09:06.729 --rc geninfo_unexecuted_blocks=1 00:09:06.729 00:09:06.729 ' 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:09:06.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.729 --rc genhtml_branch_coverage=1 00:09:06.729 --rc genhtml_function_coverage=1 00:09:06.729 --rc genhtml_legend=1 00:09:06.729 --rc geninfo_all_blocks=1 00:09:06.729 --rc geninfo_unexecuted_blocks=1 00:09:06.729 00:09:06.729 ' 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:06.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.729 --rc genhtml_branch_coverage=1 00:09:06.729 --rc genhtml_function_coverage=1 00:09:06.729 --rc genhtml_legend=1 00:09:06.729 --rc geninfo_all_blocks=1 00:09:06.729 --rc geninfo_unexecuted_blocks=1 00:09:06.729 00:09:06.729 ' 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:09:06.729 
03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:06.729 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=64539 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 64539 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@831 -- # '[' -z 64539 ']' 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:06.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
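waitforlisten blocks until the freshly started spdk_tgt answers on its RPC socket. A rough equivalent of that start-and-poll pattern, assuming rpc.py's -t timeout option and the rpc_get_methods RPC behave as in current SPDK:

    "$rootdir/build/bin/spdk_tgt" -m 0xF &
    spdk_target_pid=$!
    # Poll the default /var/tmp/spdk.sock until the target responds.
    until "$rootdir/scripts/rpc.py" -t 1 rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done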
00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:06.989 03:35:59 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:06.989 [2024-10-01 03:35:59.388736] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:09:06.989 [2024-10-01 03:35:59.388858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64539 ] 00:09:07.250 [2024-10-01 03:35:59.551029] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:07.250 [2024-10-01 03:35:59.736805] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:07.250 [2024-10-01 03:35:59.737120] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.250 [2024-10-01 03:35:59.737436] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 3 00:09:07.250 [2024-10-01 03:35:59.737547] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.821 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:07.821 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # return 0 00:09:07.821 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:09:07.822 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.822 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:08.082 nvme0n1 00:09:08.082 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.082 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:09:08.082 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_EyNOR.txt 00:09:08.082 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:09:08.083 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:08.083 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:08.083 true 00:09:08.083 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:08.083 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:09:08.083 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1727753760 00:09:08.083 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=64562 00:09:08.083 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:09:08.083 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:09:08.083 03:36:00 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:09:10.000 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:09:10.000 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.000 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:10.000 [2024-10-01 03:36:02.414484] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:09:10.000 [2024-10-01 03:36:02.414768] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:09:10.000 [2024-10-01 03:36:02.414794] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:09:10.000 [2024-10-01 03:36:02.414809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:10.000 [2024-10-01 03:36:02.418589] bdev_nvme.c:2183:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 64562 00:09:10.001 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 64562 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 64562 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_EyNOR.txt 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # 
printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_EyNOR.txt 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 64539 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@950 -- # '[' -z 64539 ']' 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # kill -0 64539 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # uname 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64539 00:09:10.001 killing process with pid 64539 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64539' 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@969 -- # kill 64539 00:09:10.001 03:36:02 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@974 -- # wait 64539 00:09:11.905 03:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:09:11.905 03:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:09:11.905 00:09:11.905 real 0m4.984s 00:09:11.905 user 0m17.294s 00:09:11.905 sys 0m0.504s 00:09:11.905 03:36:04 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:11.905 03:36:04 
nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:09:11.905 ************************************ 00:09:11.905 END TEST bdev_nvme_reset_stuck_adm_cmd 00:09:11.905 ************************************ 00:09:11.905 03:36:04 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:09:11.905 03:36:04 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:09:11.905 03:36:04 nvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:11.905 03:36:04 nvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:11.905 03:36:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.905 ************************************ 00:09:11.905 START TEST nvme_fio 00:09:11.905 ************************************ 00:09:11.905 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1125 -- # nvme_fio_test 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:09:11.905 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:11.905 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:09:11.905 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:11.905 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:11.905 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:11.905 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:11.905 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:11.905 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:12.164 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:12.164 03:36:04 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:12.164 03:36:04 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:09:12.422 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:12.422 fio-3.35 00:09:12.422 Starting 1 thread 00:09:18.996 00:09:18.996 test: (groupid=0, jobs=1): err= 0: pid=64702: Tue Oct 1 03:36:10 2024 00:09:18.996 read: IOPS=24.1k, BW=94.2MiB/s (98.8MB/s)(188MiB/2001msec) 00:09:18.996 slat (nsec): min=3372, max=71371, avg=5095.62, stdev=2319.58 00:09:18.996 clat (usec): min=529, max=7831, avg=2652.96, stdev=804.63 00:09:18.996 lat (usec): min=539, max=7842, avg=2658.05, stdev=806.08 00:09:18.996 clat percentiles (usec): 00:09:18.996 | 1.00th=[ 1778], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:18.996 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:18.996 | 70.00th=[ 2474], 80.00th=[ 2573], 90.00th=[ 3228], 95.00th=[ 4883], 00:09:18.996 | 99.00th=[ 6128], 99.50th=[ 6325], 99.90th=[ 6849], 99.95th=[ 7308], 00:09:18.996 | 99.99th=[ 7767] 00:09:18.996 bw ( KiB/s): min=94946, max=100336, per=100.00%, avg=97070.00, stdev=2870.74, samples=3 00:09:18.996 iops : min=23736, max=25084, avg=24267.33, stdev=717.87, samples=3 00:09:18.996 write: IOPS=23.9k, BW=93.5MiB/s (98.1MB/s)(187MiB/2001msec); 0 zone resets 00:09:18.996 slat (nsec): min=3472, max=76805, avg=5357.30, stdev=2365.90 00:09:18.996 clat (usec): min=481, max=7808, avg=2656.64, stdev=806.49 00:09:18.996 lat (usec): min=492, max=7820, avg=2662.00, stdev=807.97 00:09:18.996 clat percentiles (usec): 00:09:18.996 | 1.00th=[ 1795], 5.00th=[ 2180], 10.00th=[ 2278], 20.00th=[ 2343], 00:09:18.996 | 30.00th=[ 2376], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:18.996 | 70.00th=[ 2507], 80.00th=[ 2573], 90.00th=[ 3294], 95.00th=[ 4883], 00:09:18.996 | 99.00th=[ 6063], 99.50th=[ 6325], 99.90th=[ 6783], 99.95th=[ 7242], 00:09:18.996 | 99.99th=[ 7701] 00:09:18.996 bw ( KiB/s): min=94882, max=99728, per=100.00%, avg=97171.33, stdev=2434.04, samples=3 00:09:18.996 iops : min=23720, max=24932, avg=24292.67, stdev=608.74, samples=3 00:09:18.996 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:09:18.996 lat (msec) : 2=2.07%, 4=90.42%, 10=7.49% 00:09:18.996 cpu : usr=99.10%, sys=0.15%, ctx=3, majf=0, minf=608 00:09:18.996 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:18.996 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.996 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.996 issued rwts: total=48243,47912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.996 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.996 00:09:18.996 Run status group 0 (all jobs): 00:09:18.996 READ: bw=94.2MiB/s (98.8MB/s), 94.2MiB/s-94.2MiB/s (98.8MB/s-98.8MB/s), io=188MiB (198MB), run=2001-2001msec 00:09:18.996 WRITE: bw=93.5MiB/s (98.1MB/s), 93.5MiB/s-93.5MiB/s (98.1MB/s-98.1MB/s), io=187MiB (196MB), run=2001-2001msec 00:09:18.996 ----------------------------------------------------- 00:09:18.996 Suppressions used: 00:09:18.996 count bytes template 00:09:18.996 1 32 /usr/src/fio/parse.c 00:09:18.996 1 8 libtcmalloc_minimal.so 00:09:18.996 ----------------------------------------------------- 00:09:18.996 00:09:18.996 03:36:11 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:18.996 03:36:11 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:18.996 03:36:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:18.996 03:36:11 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:18.996 03:36:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:18.996 03:36:11 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:18.996 03:36:11 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:18.996 03:36:11 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:18.996 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:18.996 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:18.996 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:18.997 03:36:11 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:09:19.255 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:19.255 fio-3.35 00:09:19.255 Starting 1 thread 00:09:25.832 00:09:25.832 test: (groupid=0, jobs=1): err= 0: pid=64758: Tue Oct 1 03:36:18 2024 00:09:25.832 read: IOPS=24.3k, BW=94.8MiB/s (99.5MB/s)(190MiB/2001msec) 00:09:25.832 slat (usec): min=3, max=118, avg= 4.95, stdev= 2.24 00:09:25.832 clat (usec): min=263, max=9207, avg=2635.11, stdev=833.05 00:09:25.832 lat (usec): min=268, max=9211, avg=2640.06, stdev=834.32 00:09:25.832 clat percentiles (usec): 00:09:25.832 | 1.00th=[ 1631], 5.00th=[ 2073], 10.00th=[ 2180], 20.00th=[ 2278], 00:09:25.832 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2376], 60.00th=[ 2442], 00:09:25.832 | 70.00th=[ 2474], 80.00th=[ 2638], 90.00th=[ 3359], 95.00th=[ 4752], 00:09:25.832 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7898], 99.95th=[ 8160], 00:09:25.832 | 99.99th=[ 8586] 00:09:25.832 bw ( KiB/s): min=91200, max=101528, per=98.43%, avg=95602.67, stdev=5329.71, samples=3 00:09:25.832 iops : min=22800, max=25382, avg=23900.67, stdev=1332.43, samples=3 00:09:25.832 write: IOPS=24.1k, BW=94.3MiB/s (98.8MB/s)(189MiB/2001msec); 0 zone resets 00:09:25.832 slat (usec): min=3, max=147, avg= 5.16, stdev= 2.28 00:09:25.832 clat (usec): min=212, max=9116, avg=2634.22, stdev=822.40 00:09:25.832 lat (usec): min=216, max=9120, avg=2639.38, stdev=823.62 00:09:25.832 clat percentiles (usec): 00:09:25.832 | 1.00th=[ 1614], 5.00th=[ 2073], 10.00th=[ 2212], 20.00th=[ 2278], 00:09:25.832 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2442], 00:09:25.832 | 70.00th=[ 2474], 80.00th=[ 2638], 90.00th=[ 3326], 95.00th=[ 4686], 00:09:25.832 | 99.00th=[ 6128], 99.50th=[ 6456], 99.90th=[ 7963], 99.95th=[ 8160], 00:09:25.832 | 99.99th=[ 8586] 00:09:25.832 bw ( KiB/s): min=92080, max=101424, per=99.10%, avg=95653.33, stdev=5044.68, samples=3 00:09:25.832 iops : min=23020, max=25356, avg=23913.33, stdev=1261.17, samples=3 00:09:25.832 lat (usec) : 250=0.01%, 500=0.01%, 750=0.02%, 1000=0.09% 00:09:25.832 lat (msec) : 2=3.33%, 4=89.45%, 10=7.11% 00:09:25.832 cpu : usr=98.85%, sys=0.15%, ctx=37, majf=0, minf=607 00:09:25.832 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:25.832 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:25.832 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:25.832 issued rwts: total=48587,48287,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:25.832 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:25.832 00:09:25.832 Run status group 0 (all jobs): 00:09:25.832 READ: bw=94.8MiB/s (99.5MB/s), 94.8MiB/s-94.8MiB/s (99.5MB/s-99.5MB/s), io=190MiB (199MB), run=2001-2001msec 00:09:25.832 WRITE: bw=94.3MiB/s (98.8MB/s), 94.3MiB/s-94.3MiB/s (98.8MB/s-98.8MB/s), io=189MiB (198MB), run=2001-2001msec 00:09:25.832 ----------------------------------------------------- 00:09:25.832 Suppressions used: 00:09:25.832 count bytes template 00:09:25.832 1 32 /usr/src/fio/parse.c 00:09:25.832 1 8 libtcmalloc_minimal.so 00:09:25.832 ----------------------------------------------------- 00:09:25.832 00:09:25.832 03:36:18 nvme.nvme_fio -- nvme/nvme.sh@44 -- # 
ran_fio=true 00:09:25.832 03:36:18 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:25.832 03:36:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:25.832 03:36:18 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:26.090 03:36:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:26.090 03:36:18 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:26.350 03:36:18 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:26.350 03:36:18 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:26.350 03:36:18 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:09:26.350 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:26.350 fio-3.35 00:09:26.350 Starting 1 thread 00:09:34.462 00:09:34.462 test: (groupid=0, jobs=1): err= 0: pid=64819: Tue Oct 1 03:36:25 2024 00:09:34.462 read: IOPS=24.3k, BW=95.0MiB/s (99.6MB/s)(190MiB/2001msec) 00:09:34.462 slat (usec): min=3, max=213, avg= 5.06, stdev= 2.68 00:09:34.462 clat (usec): min=787, max=8793, avg=2628.51, stdev=858.70 00:09:34.462 lat (usec): min=800, max=8797, avg=2633.57, stdev=860.17 00:09:34.462 clat percentiles (usec): 00:09:34.462 | 1.00th=[ 1663], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2278], 00:09:34.462 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2343], 60.00th=[ 2376], 
00:09:34.462 | 70.00th=[ 2442], 80.00th=[ 2606], 90.00th=[ 3490], 95.00th=[ 4817], 00:09:34.462 | 99.00th=[ 6325], 99.50th=[ 7046], 99.90th=[ 7767], 99.95th=[ 7898], 00:09:34.462 | 99.99th=[ 8586] 00:09:34.462 bw ( KiB/s): min=92448, max=101400, per=100.00%, avg=98400.00, stdev=5154.64, samples=3 00:09:34.462 iops : min=23112, max=25350, avg=24600.00, stdev=1288.66, samples=3 00:09:34.462 write: IOPS=24.2k, BW=94.4MiB/s (99.0MB/s)(189MiB/2001msec); 0 zone resets 00:09:34.462 slat (usec): min=3, max=105, avg= 5.29, stdev= 2.30 00:09:34.462 clat (usec): min=653, max=8837, avg=2629.32, stdev=848.10 00:09:34.462 lat (usec): min=666, max=8842, avg=2634.61, stdev=849.53 00:09:34.462 clat percentiles (usec): 00:09:34.462 | 1.00th=[ 1680], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2278], 00:09:34.462 | 30.00th=[ 2311], 40.00th=[ 2343], 50.00th=[ 2376], 60.00th=[ 2376], 00:09:34.462 | 70.00th=[ 2442], 80.00th=[ 2606], 90.00th=[ 3490], 95.00th=[ 4752], 00:09:34.462 | 99.00th=[ 6194], 99.50th=[ 6980], 99.90th=[ 7832], 99.95th=[ 8094], 00:09:34.462 | 99.99th=[ 8586] 00:09:34.462 bw ( KiB/s): min=92744, max=101344, per=100.00%, avg=98432.00, stdev=4926.42, samples=3 00:09:34.462 iops : min=23186, max=25336, avg=24608.00, stdev=1231.61, samples=3 00:09:34.462 lat (usec) : 750=0.01%, 1000=0.04% 00:09:34.462 lat (msec) : 2=2.86%, 4=89.45%, 10=7.65% 00:09:34.462 cpu : usr=98.75%, sys=0.35%, ctx=30, majf=0, minf=607 00:09:34.462 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:34.462 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:34.462 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:34.462 issued rwts: total=48672,48356,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:34.462 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:34.462 00:09:34.462 Run status group 0 (all jobs): 00:09:34.462 READ: bw=95.0MiB/s (99.6MB/s), 95.0MiB/s-95.0MiB/s (99.6MB/s-99.6MB/s), io=190MiB (199MB), run=2001-2001msec 00:09:34.462 WRITE: bw=94.4MiB/s (99.0MB/s), 94.4MiB/s-94.4MiB/s (99.0MB/s-99.0MB/s), io=189MiB (198MB), run=2001-2001msec 00:09:34.462 ----------------------------------------------------- 00:09:34.462 Suppressions used: 00:09:34.462 count bytes template 00:09:34.462 1 32 /usr/src/fio/parse.c 00:09:34.462 1 8 libtcmalloc_minimal.so 00:09:34.462 ----------------------------------------------------- 00:09:34.462 00:09:34.462 03:36:26 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:34.462 03:36:26 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:09:34.462 03:36:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:34.462 03:36:26 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:09:34.462 03:36:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:09:34.462 03:36:26 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:34.462 03:36:26 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:09:34.462 03:36:26 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe 
traddr=0000.00.13.0' --bs=4096 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # break 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:09:34.462 03:36:26 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:09:34.462 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:09:34.462 fio-3.35 00:09:34.462 Starting 1 thread 00:09:44.432 00:09:44.432 test: (groupid=0, jobs=1): err= 0: pid=64874: Tue Oct 1 03:36:35 2024 00:09:44.432 read: IOPS=23.4k, BW=91.3MiB/s (95.7MB/s)(183MiB/2001msec) 00:09:44.432 slat (usec): min=3, max=220, avg= 5.37, stdev= 3.20 00:09:44.432 clat (usec): min=199, max=10681, avg=2737.01, stdev=893.60 00:09:44.432 lat (usec): min=205, max=10723, avg=2742.38, stdev=895.53 00:09:44.432 clat percentiles (usec): 00:09:44.432 | 1.00th=[ 1582], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:44.432 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2474], 00:09:44.432 | 70.00th=[ 2573], 80.00th=[ 2900], 90.00th=[ 3949], 95.00th=[ 4883], 00:09:44.432 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 8029], 99.95th=[ 8717], 00:09:44.432 | 99.99th=[10421] 00:09:44.432 bw ( KiB/s): min=88032, max=96760, per=99.33%, avg=92845.33, stdev=4432.85, samples=3 00:09:44.432 iops : min=22008, max=24190, avg=23211.33, stdev=1108.21, samples=3 00:09:44.432 write: IOPS=23.2k, BW=90.7MiB/s (95.1MB/s)(182MiB/2001msec); 0 zone resets 00:09:44.432 slat (usec): min=3, max=120, avg= 5.68, stdev= 3.09 00:09:44.432 clat (usec): min=209, max=10557, avg=2738.98, stdev=893.53 00:09:44.432 lat (usec): min=214, max=10565, avg=2744.66, stdev=895.39 00:09:44.432 clat percentiles (usec): 00:09:44.432 | 1.00th=[ 1598], 5.00th=[ 2089], 10.00th=[ 2212], 20.00th=[ 2311], 00:09:44.432 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2474], 00:09:44.432 | 70.00th=[ 2573], 80.00th=[ 2900], 90.00th=[ 3949], 95.00th=[ 4948], 00:09:44.432 | 99.00th=[ 6194], 99.50th=[ 6325], 99.90th=[ 7963], 99.95th=[ 8848], 00:09:44.432 | 
99.99th=[10290] 00:09:44.432 bw ( KiB/s): min=87496, max=96040, per=100.00%, avg=92962.67, stdev=4746.75, samples=3 00:09:44.432 iops : min=21874, max=24010, avg=23240.67, stdev=1186.69, samples=3 00:09:44.432 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.08% 00:09:44.432 lat (msec) : 2=3.22%, 4=87.00%, 10=9.66%, 20=0.02% 00:09:44.432 cpu : usr=98.85%, sys=0.25%, ctx=3, majf=0, minf=606 00:09:44.432 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:09:44.432 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:44.432 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:44.432 issued rwts: total=46758,46467,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:44.432 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:44.432 00:09:44.432 Run status group 0 (all jobs): 00:09:44.432 READ: bw=91.3MiB/s (95.7MB/s), 91.3MiB/s-91.3MiB/s (95.7MB/s-95.7MB/s), io=183MiB (192MB), run=2001-2001msec 00:09:44.432 WRITE: bw=90.7MiB/s (95.1MB/s), 90.7MiB/s-90.7MiB/s (95.1MB/s-95.1MB/s), io=182MiB (190MB), run=2001-2001msec 00:09:44.432 ----------------------------------------------------- 00:09:44.432 Suppressions used: 00:09:44.432 count bytes template 00:09:44.432 1 32 /usr/src/fio/parse.c 00:09:44.432 1 8 libtcmalloc_minimal.so 00:09:44.432 ----------------------------------------------------- 00:09:44.432 00:09:44.432 03:36:35 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:09:44.432 03:36:35 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:09:44.432 00:09:44.432 real 0m31.521s 00:09:44.432 user 0m16.679s 00:09:44.432 sys 0m28.351s 00:09:44.432 03:36:35 nvme.nvme_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.432 03:36:35 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:09:44.432 ************************************ 00:09:44.432 END TEST nvme_fio 00:09:44.432 ************************************ 00:09:44.432 00:09:44.432 real 1m41.779s 00:09:44.432 user 3m38.452s 00:09:44.432 sys 0m39.312s 00:09:44.432 03:36:35 nvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.432 03:36:35 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:44.432 ************************************ 00:09:44.432 END TEST nvme 00:09:44.432 ************************************ 00:09:44.432 03:36:35 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:09:44.432 03:36:35 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:44.432 03:36:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:44.432 03:36:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.432 03:36:35 -- common/autotest_common.sh@10 -- # set +x 00:09:44.432 ************************************ 00:09:44.432 START TEST nvme_scc 00:09:44.432 ************************************ 00:09:44.432 03:36:35 nvme_scc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:09:44.432 * Looking for test storage... 
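Each of the four nvme_fio passes above (one per PCIe bdf, 0000:00:10.0 through 0000:00:13.0) launches fio the same way: the script resolves the sanitizer runtime that the SPDK ioengine plugin is linked against and preloads it ahead of the plugin, because fio dlopen()s the ioengine at runtime and ASan must be the first DSO loaded. Note also that the '--filename' argument substitutes '.' for ':' in the PCI address, since fio splits filenames on colons. A condensed sketch of the pattern, with paths as they appear in the traces (an approximation, not the verbatim autotest_common.sh helper):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    asan_lib=
    for sanitizer in libasan libclang_rt.asan; do
        # fio dlopen()s the ioengine, so the sanitizer runtime the plugin
        # links against must already be loaded when the plugin comes up.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096
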
00:09:44.432 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:44.432 03:36:35 nvme_scc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:44.432 03:36:35 nvme_scc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:44.432 03:36:35 nvme_scc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:44.432 03:36:35 nvme_scc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@345 -- # : 1 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:44.432 03:36:35 nvme_scc -- scripts/common.sh@368 -- # return 0 00:09:44.432 03:36:35 nvme_scc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:44.432 03:36:35 nvme_scc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:44.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.433 --rc genhtml_branch_coverage=1 00:09:44.433 --rc genhtml_function_coverage=1 00:09:44.433 --rc genhtml_legend=1 00:09:44.433 --rc geninfo_all_blocks=1 00:09:44.433 --rc geninfo_unexecuted_blocks=1 00:09:44.433 00:09:44.433 ' 00:09:44.433 03:36:35 nvme_scc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.433 --rc genhtml_branch_coverage=1 00:09:44.433 --rc genhtml_function_coverage=1 00:09:44.433 --rc genhtml_legend=1 00:09:44.433 --rc geninfo_all_blocks=1 00:09:44.433 --rc geninfo_unexecuted_blocks=1 00:09:44.433 00:09:44.433 ' 00:09:44.433 03:36:35 nvme_scc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.433 --rc genhtml_branch_coverage=1 00:09:44.433 --rc genhtml_function_coverage=1 00:09:44.433 --rc genhtml_legend=1 00:09:44.433 --rc geninfo_all_blocks=1 00:09:44.433 --rc geninfo_unexecuted_blocks=1 00:09:44.433 00:09:44.433 ' 00:09:44.433 03:36:35 nvme_scc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:44.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:44.433 --rc genhtml_branch_coverage=1 00:09:44.433 --rc genhtml_function_coverage=1 00:09:44.433 --rc genhtml_legend=1 00:09:44.433 --rc geninfo_all_blocks=1 00:09:44.433 --rc geninfo_unexecuted_blocks=1 00:09:44.433 00:09:44.433 ' 00:09:44.433 03:36:35 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.433 03:36:35 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:09:44.433 03:36:35 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.433 03:36:35 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.433 03:36:35 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.433 03:36:35 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.433 03:36:35 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.433 03:36:35 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.433 03:36:35 nvme_scc -- paths/export.sh@5 -- # export PATH 00:09:44.433 03:36:35 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
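The lcov gate traced above ('lt 1.15 2') resolves through cmp_versions in scripts/common.sh: both version strings are split on '.', '-' or ':' and compared field by field. A condensed sketch of the idea (the real helper additionally normalizes each field through its decimal() function, so treat this as an approximation):

    cmp_versions() {
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local op=$2 v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < len; v++)); do
            # missing fields compare as 0; 10# forces base-10 (leading zeros)
            (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == *'='* ]]   # all fields equal: only ==, <=, >= succeed
    }
    cmp_versions 1.15 '<' 2 && echo 'lcov older than 2.x'   # true: 1 < 2
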
00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:44.433 03:36:35 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:09:44.433 03:36:35 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:44.433 03:36:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:09:44.433 03:36:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:09:44.433 03:36:35 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:09:44.433 03:36:35 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:44.433 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:44.433 Waiting for block devices as requested 00:09:44.433 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.433 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.433 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:44.433 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:49.710 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:49.710 03:36:41 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:49.710 03:36:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:49.710 03:36:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:49.710 03:36:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.710 03:36:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
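The wall of eval lines above and below is nvme_get at work: scan_nvme_ctrls runs nvme id-ctrl against each controller and folds every "field : value" line of the output into a global associative array (nvme0, nvme1, ...), one assignment per register, which is exactly what the xtrace is recording. Stripped to its core, the loop looks roughly like this (simplified sketch; the real helper in test/common/nvme/functions.sh goes through eval so the array name can be passed in):

    declare -A nvme0
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}          # "vid     " -> "vid"
        [[ -n $reg && -n $val ]] || continue
        nvme0[$reg]=${val# }              # keep the raw value, e.g. "0x1b36"
    done < <(nvme id-ctrl /dev/nvme0)
    printf 'vid=%s mdts=%s oncs=%s\n' "${nvme0[vid]}" "${nvme0[mdts]}" "${nvme0[oncs]}"
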
00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.710 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
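Values such as nvme0[oacs]=0x12a captured just above are bitmasks straight out of Identify Controller. As a worked example, 0x12a sets bits 1, 3, 5 and 8 of OACS, i.e. Format NVM, Namespace Management, Directives and Doorbell Buffer Config per the NVMe base specification's Identify Controller layout. The decoder below is a hypothetical illustration only (decode_oacs is not part of functions.sh):

    # Hypothetical helper: name the capabilities set in an OACS value.
    decode_oacs() {
        local oacs=$1 i
        local -a bits=([0]="Security Send/Receive" [1]="Format NVM"
                       [2]="FW Download/Commit" [3]="Namespace Management"
                       [4]="Device Self-test" [5]="Directives" [6]="NVMe-MI"
                       [7]="Virtualization Mgmt" [8]="Doorbell Buffer Config"
                       [9]="Get LBA Status")
        for i in "${!bits[@]}"; do
            (( oacs >> i & 1 )) && echo "oacs bit $i: ${bits[i]}"
        done
    }
    decode_oacs 0x12a   # -> Format NVM, Namespace Management, Directives, Doorbell Buffer Config
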
00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.711 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:49.712 03:36:41 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:49.712 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:49.713 03:36:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.713 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
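[editor's note] The trace above is the same few shell steps repeated once per field: nvme-cli output is split on the first ':' with `IFS=: read -r reg val`, and each non-empty pair is stored in a global associative array via `eval` (nvme/functions.sh@16-23). A minimal, self-contained sketch of that pattern, assuming nvme-cli is installed and /dev/nvme0n1 exists; the whitespace trimming and array name here are illustrative, not the exact helper:

#!/usr/bin/env bash
# Sketch of the nvme_get parsing loop traced above (nvme/functions.sh@16-23).
# Reads "name : value" lines from nvme-cli and fills a global associative array.
declare -gA nvme0n1=()

while IFS=: read -r reg val; do
  # Skip separators and lines without a "name : value" shape.
  [[ -n $reg && -n $val ]] || continue
  reg=${reg//[[:space:]]/}                      # field names carry no spaces
  val="${val#"${val%%[![:space:]]*}"}"          # trim leading whitespace
  val="${val%"${val##*[![:space:]]}"}"          # trim trailing whitespace
  nvme0n1[$reg]=$val                            # the trace does: eval "nvme0n1[$reg]=\"$val\""
done < <(nvme id-ns /dev/nvme0n1)

printf 'nsze=%s flbas=%s\n' "${nvme0n1[nsze]}" "${nvme0n1[flbas]}"

Because `read -r reg val` puts everything after the first ':' into val, multi-field values such as the lbaf0 line ('ms:0 lbads:9 rp:0') come through intact, which is why they appear as single array entries in the trace.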
00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:49.714 03:36:41 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:49.714 03:36:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:49.714 03:36:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:49.714 03:36:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:49.715 03:36:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.715 03:36:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:49.715 03:36:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:49.715 
03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:49.715 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
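[editor's note] Between the two field dumps, at the hand-off from nvme0 to nvme1 traced above (nvme/functions.sh@47-63), the script walks /sys/class/nvme, resolves each controller's PCI address, and registers the controller, its namespaces, and its BDF in the ctrls, nvmes, and bdfs arrays. A rough sketch of that enumeration with the same array names; error handling is simplified and the pci_can_use allow/block-list check from scripts/common.sh is omitted:

#!/usr/bin/env bash
# Sketch of the controller/namespace enumeration traced around functions.sh@47-63.
declare -A ctrls=() nvmes=() bdfs=()

for ctrl in /sys/class/nvme/nvme*; do
  [[ -e $ctrl ]] || continue
  dev=${ctrl##*/}                                    # e.g. nvme1
  pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:10.0
  ns_list=()
  for ns in "$ctrl/${dev}n"*; do                     # namespaces, e.g. nvme1n1
    [[ -e $ns ]] && ns_list+=("${ns##*/}")
  done
  ctrls[$dev]=$dev
  bdfs[$dev]=$pci
  nvmes[$dev]="${ns_list[*]}"
done

for dev in "${!ctrls[@]}"; do
  printf '%s -> %s (%s)\n' "$dev" "${bdfs[$dev]}" "${nvmes[$dev]}"
done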
00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:49.716 03:36:41 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.716 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:49.717 03:36:41 nvme_scc -- 
00:09:49.717 03:36:41 nvme_scc -- nvme/functions.sh -- nvme1 id-ctrl, remaining fields: nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12340 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:49.718 03:36:41 nvme_scc -- nvme/functions.sh -- nvme1 power state: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:09:49.718 03:36:41 nvme_scc -- nvme/functions.sh@53-57 -- namespace scan: /sys/class/nvme/nvme1/nvme1n1 exists; nvme_get nvme1n1 id-ns /dev/nvme1n1
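The trace above is the nvme_get helper at work: each "field : value" line printed by nvme-cli is split on ':' (IFS=:, read -r reg val) and eval'd into a global associative array named after the device, so later checks can read e.g. ${nvme1[oncs]}. A minimal sketch of that pattern, assuming a hypothetical helper name (parse_id_output is illustrative, not the verbatim nvme/functions.sh code):

    parse_id_output() {                      # hypothetical name; the real helper is nvme_get
        local ref=$1 dev=$2 reg val
        declare -gA "$ref=()"                # e.g. declare -gA nvme1=()
        while IFS=: read -r reg val; do
            reg=$(tr -d '[:space:]' <<< "$reg")        # normalize the field name
            [[ -n $reg ]] || continue                  # skip blank/odd lines
            val=$(sed 's/^[[:space:]]*//' <<< "$val")  # trim leading blanks
            eval "${ref}[\$reg]=\$val"                 # e.g. nvme1[oncs]=0x15d
        done < <(/usr/local/src/nvme-cli/nvme id-ctrl "$dev")
    }
    # usage: parse_id_output nvme1 /dev/nvme1; echo "${nvme1[sqes]}"

Values that themselves contain colons (the ps0/rwt power-state lines) survive this split because read assigns everything after the first ':' to the last variable.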
00:09:49.718 03:36:41 nvme_scc -- nvme/functions.sh -- nvme1n1 id-ns: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0 nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0 npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0 nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:49.719 03:36:41 nvme_scc -- nvme/functions.sh -- nvme1n1 LBA formats: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0' lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:49.720 03:36:41 nvme_scc -- nvme/functions.sh@58-63 -- nvme1 registered: _ctrl_ns[1]=nvme1n1 ctrls[nvme1]=nvme1 nvmes[nvme1]=nvme1_ns bdfs[nvme1]=0000:00:10.0 ordered_ctrls[1]=nvme1
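To decode the namespace geometry just recorded: flbas=0x7 selects LBA format 7, whose descriptor reads 'ms:64 lbads:12', and lbads is a base-2 exponent per the NVMe spec, so the data block size is 2^12 = 4096 bytes (plus 64 bytes of per-block metadata). nsze/ncap count logical blocks, so this namespace holds roughly 0x17a17a blocks of 4 KiB. A quick shell check (illustrative arithmetic only, not part of the test run):

    echo $(( 2 ** 12 ))           # 4096 -> block size implied by lbads:12
    echo $(( 0x17a17a * 4096 ))   # 6343335936 bytes, i.e. ~6.3 GB (~5.9 GiB)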
00:09:49.720 03:36:41 nvme_scc -- nvme/functions.sh@47-52 -- next controller: /sys/class/nvme/nvme2 exists, pci=0000:00:12.0, pci_can_use 0000:00:12.0 -> ok (return 0); ctrl_dev=nvme2; nvme_get nvme2 id-ctrl /dev/nvme2
00:09:49.720 03:36:41 nvme_scc -- nvme/functions.sh -- nvme2 id-ctrl: vid=0x1b36 ssvid=0x1af4 sn='12342 ' mn='QEMU NVMe Ctrl ' fr='8.0.0 ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400 rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0 subnqn=nqn.2019-08.org.qemu:12342 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh -- nvme2 power state: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
0x100000 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
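The functions.sh@16-23 entries traced above show the shape of the nvme_get capture: the output of /usr/local/src/nvme-cli/nvme id-ns is read line by line, split at the first colon into a register name and value, empty values are skipped, and each pair is eval'ed into a global associative array named after the device node. A minimal sketch of that loop reconstructed from the trace (the exact whitespace trimming is an assumption; SPDK's helper may differ in detail):

nvme_get() {
    local ref=$1 reg val
    shift
    local -gA "$ref=()"                 # e.g. declare -gA nvme2n1=()
    while IFS=: read -r reg val; do     # split 'nsze : 0x100000' at the first colon
        [[ -n $val ]] || continue       # skip headers and blank lines
        reg=${reg//[[:space:]]/}        # 'nsze ' -> 'nsze'
        read -r val <<< "$val"          # trim padding around the value (assumed)
        eval "${ref}[$reg]=\"$val\""    # nvme2n1[nsze]="0x100000"
    done < <("$@")                      # e.g. nvme id-ns /dev/nvme2n1
}

Invoked as nvme_get nvme2n1 id-ns /dev/nvme2n1, after which individual fields read back as ${nvme2n1[nsze]}, ${nvme2n1[flbas]}, and so on.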
00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.723 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:49.724 03:36:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:49.724 03:36:41 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:49.724 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:49.725 03:36:41 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
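The nsze, ncap and nuse values recorded a few entries above for nvme2n2 are counts of logical blocks, not bytes, so 0x100000 here means 1,048,576 blocks; the byte size only follows once the in-use LBA format is known. A quick arithmetic check (assuming the 4 KiB lbads:12 format these QEMU namespaces report as in use):

echo $(( 0x100000 ))               # 1048576 logical blocks
echo $(( 0x100000 * (1 << 12) ))   # 4294967296 bytes = 4 GiB at lbads:12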
00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:49.725 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
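The lbaf0-lbaf7 descriptors recorded just below pair a metadata size (ms) with a log2 block size (lbads); bits 3:0 of flbas select the active one, which is why the flbas=0x4 captured earlier for this namespace lines up with the lbaf4 entry marked "(in use)". A small sketch resolving the active format from the array built above (the string handling is illustrative):

fmt=$(( ${nvme2n2[flbas]} & 0xf ))   # 0x4 -> LBA format 4
lbaf=${nvme2n2[lbaf$fmt]}            # 'ms:0 lbads:12 rp:0 (in use)'
lbads=${lbaf##*lbads:}               # '12 rp:0 (in use)'
lbads=${lbads%% *}                   # '12' -> block size 1<<12 = 4096 bytes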
00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 
03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 
03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:49.726 03:36:41 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.726 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:49.727 
03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:49.727 03:36:41 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:49.727 03:36:41 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:49.727 03:36:41 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:49.727 03:36:41 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:49.727 03:36:41 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:49.728 03:36:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.728 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 
03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme3[hmminds]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.729 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:49.730 03:36:41 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:49.730 03:36:41 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:49.731 03:36:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs 00:09:49.731 
03:36:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]] 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 )) 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 )) 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1 00:09:49.731 03:36:41 nvme_scc -- nvme/functions.sh@209 -- # return 0 00:09:49.731 03:36:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1 00:09:49.731 03:36:41 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0 00:09:49.731 03:36:41 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:49.990 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:50.557 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.557 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.557 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:50.557 0000:00:12.0 (1b36 
0010): nvme -> uio_pci_generic
00:09:50.557 03:36:42 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:50.557 03:36:42 nvme_scc -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:09:50.557 03:36:42 nvme_scc -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:50.557 03:36:42 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:50.557 ************************************
00:09:50.557 START TEST nvme_simple_copy
00:09:50.557 ************************************
00:09:50.557 03:36:42 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:50.816 Initializing NVMe Controllers
00:09:50.816 Attaching to 0000:00:10.0
00:09:50.816 Controller supports SCC. Attached to 0000:00:10.0
00:09:50.816 Namespace ID: 1 size: 6GB
00:09:50.816 Initialization complete.
00:09:50.816
00:09:50.816 Controller QEMU NVMe Ctrl (12340 )
00:09:50.816 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:50.816 Namespace Block Size:4096
00:09:50.816 Writing LBAs 0 to 63 with Random Data
00:09:50.816 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:50.816 LBAs matching Written Data: 64
00:09:50.816
00:09:50.816 real 0m0.244s
00:09:50.816 user 0m0.082s
00:09:50.816 sys 0m0.061s
00:09:50.816 ************************************
00:09:50.816 END TEST nvme_simple_copy
00:09:50.816 ************************************
00:09:50.816 03:36:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:50.816 03:36:43 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:50.816 ************************************
00:09:50.816 END TEST nvme_scc
00:09:50.816 ************************************
00:09:50.816
00:09:50.816 real 0m7.475s
00:09:50.816 user 0m1.046s
00:09:50.816 sys 0m1.299s
00:09:50.816 03:36:43 nvme_scc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:50.816 03:36:43 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:50.816 03:36:43 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:50.816 03:36:43 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:50.816 03:36:43 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:50.816 03:36:43 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:50.816 03:36:43 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:50.816 03:36:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:09:50.816 03:36:43 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:50.816 03:36:43 -- common/autotest_common.sh@10 -- # set +x
00:09:50.816 ************************************
00:09:50.816 START TEST nvme_fdp
00:09:50.816 ************************************
00:09:50.816 03:36:43 nvme_fdp -- common/autotest_common.sh@1125 -- # test/nvme/nvme_fdp.sh
00:09:50.816 * Looking for test storage...
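While nvme_fdp locates its test storage, it is worth pinning down what the nvme_scc suite above actually gated on: ctrl_has_scc masks ONCS (0x15d on these QEMU controllers) against 1 << 8, the Copy-command bit in the NVMe base spec, which is why nvme1 at 0000:00:10.0 was picked; the simple_copy app then wrote LBAs 0 to 63 with random data, issued a Simple Copy to destination LBA 256, and read back 64 matching LBAs. A minimal stand-alone sketch of the same gate and copy follows; the device paths and the nvme copy flags are illustrative assumptions, so verify them against nvme copy --help on your nvme-cli build.

#!/usr/bin/env bash
# Sketch of the SCC gate traced above: read ONCS from id-ctrl and test
# bit 8 (Copy command support). Paths and copy flags are assumptions.
set -euo pipefail

dev=${1:-/dev/nvme1}     # controller character device (illustrative)
ns=${2:-/dev/nvme1n1}    # a namespace on that controller (illustrative)

# id-ctrl prints a line like "oncs : 0x15d"; take the value field.
oncs=$(nvme id-ctrl "$dev" | awk '/^oncs/ {print $3}')

if (( oncs & 1 << 8 )); then   # same arithmetic test as functions.sh@188
    echo "$dev supports Simple Copy (oncs=$oncs)"
    # Re-create what simple_copy verified: source LBAs 0-63 -> destination 256.
    # --blocks is zero-based in nvme-cli, so 63 means 64 blocks (assumption).
    nvme copy "$ns" --sdlba=256 --slbs=0 --blocks=63
else
    echo "$dev lacks Simple Copy; skipping" >&2
fi

The harness reaches the same verdict purely from the cached id-ctrl registers, so no extra admin commands are issued at selection time.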
00:09:50.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:50.816 03:36:43 nvme_fdp -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:09:50.816 03:36:43 nvme_fdp -- common/autotest_common.sh@1681 -- # lcov --version
00:09:50.816 03:36:43 nvme_fdp -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:09:51.075 03:36:43 nvme_fdp -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:09:51.075 03:36:43 nvme_fdp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:09:51.075 03:36:43 nvme_fdp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:09:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:51.075 --rc genhtml_branch_coverage=1
00:09:51.075 --rc genhtml_function_coverage=1
00:09:51.075 --rc genhtml_legend=1
00:09:51.075 --rc geninfo_all_blocks=1
00:09:51.075 --rc geninfo_unexecuted_blocks=1
00:09:51.075
00:09:51.075 '
00:09:51.075 03:36:43 nvme_fdp -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:09:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:09:51.075 --rc genhtml_branch_coverage=1
00:09:51.075 --rc genhtml_function_coverage=1
00:09:51.075 --rc genhtml_legend=1
00:09:51.075 --rc geninfo_all_blocks=1
00:09:51.075 --rc geninfo_unexecuted_blocks=1
00:09:51.075
00:09:51.075 '
00:09:51.075 03:36:43 nvme_fdp -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:09:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.075 --rc genhtml_branch_coverage=1 00:09:51.075 --rc genhtml_function_coverage=1 00:09:51.075 --rc genhtml_legend=1 00:09:51.075 --rc geninfo_all_blocks=1 00:09:51.075 --rc geninfo_unexecuted_blocks=1 00:09:51.075 00:09:51.075 ' 00:09:51.075 03:36:43 nvme_fdp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:51.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:51.075 --rc genhtml_branch_coverage=1 00:09:51.075 --rc genhtml_function_coverage=1 00:09:51.075 --rc genhtml_legend=1 00:09:51.075 --rc geninfo_all_blocks=1 00:09:51.075 --rc geninfo_unexecuted_blocks=1 00:09:51.075 00:09:51.075 ' 00:09:51.075 03:36:43 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:51.075 03:36:43 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:51.075 03:36:43 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.075 03:36:43 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.075 03:36:43 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:51.075 03:36:43 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:51.075 03:36:43 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
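The lcov gate traced just before those exports ran lt 1.15 2, which lands in cmp_versions from scripts/common.sh: both version strings are split on '.', '-' and ':', missing fields count as zero, and the pieces are compared numerically field by field; since 1 < 2, the pre-2.x --rc lcov_* option set was selected. A compact re-implementation of that idea, as a sketch with an illustrative name rather than the SPDK helper verbatim (it assumes purely numeric fields):

# Sketch of the scripts/common.sh version gate seen in the trace above.
cmp_versions_sketch() {
    local IFS=.-: op=$2      # split on '.', '-' and ':' like the real helper
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # missing fields count as 0; 10# forces base 10 despite leading zeros
        (( 10#${ver1[v]:-0} > 10#${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
        (( 10#${ver1[v]:-0} < 10#${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '==' || $op == '>=' || $op == '<=' ]]   # all fields equal
}

cmp_versions_sketch 1.15 '<' 2 && echo "lcov predates 2.x: use legacy --rc options"

Splitting on '-' and ':' as well as '.' is what lets the same comparator handle strings like "1.15-rc0" without a separate parser.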
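The scan that begins below (ctrls=(), nvmes=(), bdfs=(), then scan_nvme_ctrls walking /sys/class/nvme) repeats the nvme_get pattern behind the long register dumps earlier in this log: each "reg : val" line of nvme id-ctrl is split on the colon and eval'ed into a per-controller associative array, e.g. nvme0[vid]=0x1b36. A trimmed-down version of that parse, using a fixed array name so no eval is needed (the function and array names here are illustrative):

# Stand-in for the dynamically named nvme0/nvme1/... arrays in functions.sh.
declare -A ctrl_regs

nvme_get_sketch() {
    local dev=$1 reg val
    # The harness drives its own build at /usr/local/src/nvme-cli/nvme;
    # a plain `nvme` from PATH behaves the same here.
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # "oncs   " -> "oncs"
        val=${val# }                # drop the single space after the colon
        [[ -n $reg && -n $val ]] || continue
        ctrl_regs[$reg]=$val
    done < <(nvme id-ctrl "$dev")
}

nvme_get_sketch /dev/nvme0
echo "vid=${ctrl_regs[vid]} oncs=${ctrl_regs[oncs]} subnqn=${ctrl_regs[subnqn]}"

functions.sh needs the eval (plus a shifted name reference) only because the target array name, nvme0 or nvme2n3 and so on, is computed at run time; with a fixed array the direct assignment above is sufficient.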
00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:51.075 03:36:43 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:51.075 03:36:43 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:51.075 03:36:43 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:51.333 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:51.333 Waiting for block devices as requested 00:09:51.333 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.592 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.592 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:51.592 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:56.882 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:56.882 03:36:49 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:56.882 03:36:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:56.882 03:36:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:56.882 03:36:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.882 03:36:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 
00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:56.882 03:36:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.882 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:56.883 03:36:49 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
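Note on the pattern repeating above: nvme_get runs /usr/local/src/nvme-cli/nvme id-ctrl against the device, splits each output line at the colon, and evals the pair into a global associative array (nvme0[reg]=val), which is why every register appears as an eval followed by the resulting assignment. A minimal self-contained sketch of that loop, assuming plain "key : value" output from nvme-cli; this is a simplified illustration, not the exact nvme/functions.sh implementation:

nvme_get() {
    # Populate a named, global associative array from "reg : val" lines.
    local ref=$1 reg val
    shift
    local -gA "$ref=()"
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}    # strip the padding left of the colon
        val=${val# }                # strip the single space after it
        [[ -n $val ]] && eval "${ref}[${reg}]=\"${val}\""
    done < <("$@")
}
# Usage mirroring the trace:
#   nvme_get nvme0 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
#   echo "${nvme0[mdts]}"    # -> 7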
00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:56.883 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.883 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:56.884 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x7 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:56.884 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:56.884 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.885 
03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:56.885 03:36:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.885 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:56.886 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:56.886 03:36:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:56.886 03:36:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:56.886 03:36:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:56.887 03:36:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.887 03:36:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 
03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:56.887 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- 
# IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 
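
[Editor's note] The trace above is nvme/functions.sh's nvme_get at work: it pipes nvme-cli "id-ctrl" output through a `while IFS=: read -r reg val` loop (functions.sh@21), skips lines with an empty value (functions.sh@22), and evals each remaining pair into a global associative array keyed by register name (functions.sh@23), yielding assignments like nvme1[crdt1]=0 and nvme1[oacs]=0x12a. A minimal sketch consistent with those trace lines follows; the key normalization and whitespace trimming are assumptions, since only the resulting assignments are visible in the log:

    nvme_get() {
        local ref=$1 reg val
        shift                                  # remaining args: nvme-cli subcommand + device
        local -gA "$ref=()"                    # global assoc array named by $ref (functions.sh@20)

        # Split each "register : value" line at the first colon; colons inside
        # the value (e.g. "mp:25.00W operational enlat:16 ...") stay in $val.
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue          # drop headers/blank lines (functions.sh@22)
            reg=${reg//[![:alnum:]_]/}         # assumed cleanup: "ps 0" -> "ps0"
            eval "${ref}[${reg}]=\"${val# }\"" # e.g. eval 'nvme1[crdt1]="0"' (functions.sh@23)
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

Called as `nvme_get nvme1 id-ctrl /dev/nvme1`, this is what produces the long run of eval/assignment pairs the log is printing.
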
03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.888 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:56.889 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme1[ofcs]=0 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.889 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
0x17a17a ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:56.890 03:36:49 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- 
# read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1n1[anagrpid]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.890 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:56.891 03:36:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:56.891 03:36:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:56.891 03:36:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.891 03:36:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:56.891 
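
[Editor's note] The id-ns dump that just completed also shows how the in-use block size falls out: nvme1n1[flbas]=0x7 selects LBA format 7, and lbaf7 is "ms:64 lbads:12", i.e. 2^12 = 4096-byte data blocks with 64 bytes of metadata, which is why lbaf7 is the entry tagged "(in use)". After the namespace loop, functions.sh@58-63 register the controller in the global maps (ctrls, nvmes, bdfs, ordered_ctrls) and the outer loop moves on to nvme2 at 0000:00:12.0. A sketch of that discovery loop, reconstructed from the functions.sh line numbers in the trace — the wrapper name and the derivation of $pci are assumptions (only the resulting BDF, e.g. 0000:00:10.0, appears in the log):

    declare -A ctrls nvmes bdfs
    declare -a ordered_ctrls

    scan_nvme_ctrls() {
        local ctrl ctrl_dev ns ns_dev pci
        for ctrl in /sys/class/nvme/nvme*; do              # functions.sh@47
            [[ -e $ctrl ]] || continue                     # functions.sh@48
            pci=$(basename "$(readlink -f "$ctrl/device")") # assumed way to get the BDF
            pci_can_use "$pci" || continue                 # functions.sh@50
            ctrl_dev=${ctrl##*/}                           # functions.sh@51
            nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"  # functions.sh@52

            local -gA "${ctrl_dev}_ns=()"                  # assumed: per-ctrl namespace map
            unset -n _ctrl_ns                              # drop any nameref from a prior pass
            local -n _ctrl_ns=${ctrl_dev}_ns               # functions.sh@53
            for ns in "$ctrl/${ctrl##*/}n"*; do            # functions.sh@54
                [[ -e $ns ]] || continue                   # functions.sh@55
                ns_dev=${ns##*/}                           # functions.sh@56
                nvme_get "$ns_dev" id-ns "/dev/$ns_dev"    # functions.sh@57
                _ctrl_ns[${ns##*n}]=$ns_dev                # functions.sh@58
            done

            ctrls["$ctrl_dev"]=$ctrl_dev                   # functions.sh@60
            nvmes["$ctrl_dev"]=${ctrl_dev}_ns              # functions.sh@61
            bdfs["$ctrl_dev"]=$pci                         # functions.sh@62
            ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev     # functions.sh@63
        done
    }

After the scan, lookups like ${bdfs[nvme1]} return 0000:00:10.0, matching the registration lines above.
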
03:36:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:56.891 03:36:49 nvme_fdp 
-- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.891 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2[aerl]="3"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp 
-- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.892 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 
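
[Editor's note] From here the second controller's identify data is captured the same way (nvme2[ver]=0x10400, nvme2[ctratt]=0x8000, ...). The payoff of the eval/associative-array approach is that later test stages can branch on controller capabilities with plain bash lookups instead of re-invoking nvme-cli. A hypothetical consumer, not shown in this log — the helper name is a placeholder, and 0x8000 is simply the CTRATT value the trace reports for nvme2, not necessarily the bit nvme_fdp keys on:

    # Return success if the named controller's CTRATT has any bit of $2 set.
    ctratt_has_bit() {
        local -n _ctrl=$1                      # nameref to e.g. the nvme2 array
        (( ${_ctrl[ctratt]:-0} & $2 ))
    }

    for ctrl_dev in "${!ctrls[@]}"; do
        if ctratt_has_bit "$ctrl_dev" 0x8000; then
            echo "$ctrl_dev (${bdfs[$ctrl_dev]}): CTRATT bit 0x8000 set"
        fi
    done
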
00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.893 03:36:49 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.893 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
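The repeating @21/@22/@23 entries above are one parser loop inside nvme/functions.sh's nvme_get helper: each "reg : val" line of nvme-cli output is split on ':' (functions.sh@21), entries without a value are skipped (functions.sh@22), and the pair is eval'd into a global associative array such as nvme2 (functions.sh@23). A minimal sketch of that loop, reconstructed from the traced line numbers only; whitespace trimming and other details of the real helper may differ, so treat this as illustrative rather than the repo's exact code:

  nvme_get() {
      local ref=$1 reg val                         # functions.sh@17
      shift                                        # functions.sh@18
      local -gA "$ref=()"                          # functions.sh@20: e.g. nvme2=() globally
      while IFS=: read -r reg val; do              # functions.sh@21: split "oncs : 0x15d"
          reg=${reg%% *} val=${val# }              # trim; the repo's exact trimming may differ
          [[ -n $val ]] || continue                # functions.sh@22: keep lines with a value
          eval "${ref}[$reg]=\"$val\""             # functions.sh@23: nvme2[oncs]="0x15d"
      done < <(/usr/local/src/nvme-cli/nvme "$@")  # functions.sh@16: nvme id-ctrl /dev/nvme2
  }

After a call such as nvme_get nvme2 id-ctrl /dev/nvme2, fields are addressable as ${nvme2[oncs]}, ${nvme2[sqes]}, and so on, which is what the rest of the trace keeps reading back.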
00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:56.894 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
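For scale, the nvme2n1 size fields just captured are block counts, not bytes: nsze/ncap/nuse are all 0x100000 LBAs, and flbas=0x4 selects LBA format 4 (lbads:12, i.e. 4096-byte blocks, shown as "(in use)" further down), so the namespace works out to 4 GiB. A one-line check with plain shell arithmetic:

  # 0x100000 blocks x 2^12 bytes/block = 4294967296 bytes = 4 GiB
  printf '%d blocks x %d B = %d bytes\n' $((0x100000)) $((1 << 12)) $((0x100000 << 12))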
00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:56.894 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 
00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 
' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.895 03:36:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:56.896 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.896 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.897 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
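Each nvme2n* namespace in this dump reports nlbaf=7 (NLBAF is zero-based, so eight LBA formats) and flbas=0x4. Per the NVMe spec, FLBAS bits 3:0 index the format currently in use and bit 4 selects extended (in-band) metadata; decoding 0x4 lands on lbaf4, the "ms:0 lbads:12 rp:0 (in use)" descriptor these namespaces report. A small illustrative decoder; the function name is ours, not something defined in nvme/functions.sh:

  # Hypothetical helper: split FLBAS into its spec-defined bit fields.
  decode_flbas() {
      local flbas=$1
      echo "current lbaf index: $((flbas & 0xf))"       # bits 3:0
      echo "extended metadata:  $(((flbas >> 4) & 1))"  # bit 4
  }
  decode_flbas "$((0x4))"   # -> index 4, extended metadata 0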
00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:56.898 
03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.898 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:56.899 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:56.899 03:36:49 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:56.899 03:36:49 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:56.899 03:36:49 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:56.899 03:36:49 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:56.899 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.899 
03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:56.899 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.900 
03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.900 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 
00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.901 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:56.902 03:36:49 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:56.902 03:36:49 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:57.160 03:36:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 
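The trace above shows the pattern functions.sh applies to every controller: pipe `nvme id-ctrl <dev>` through an `IFS=: read -r reg val` loop, `eval` each name/value pair into a per-controller associative array, then test CTRATT bit 19 to find an FDP-capable controller. A minimal standalone sketch of the same idea, assuming nvme-cli's plain "name : value" id-ctrl output (the array and device names here are illustrative, not SPDK's):

  declare -A reg
  while IFS=: read -r k v; do
      k=${k//[[:space:]]/}           # strip padding around the field name
      [[ -n $k ]] || continue         # skip blank/header lines
      reg[$k]=${v# }                  # keep the value, minus the leading space
  done < <(nvme id-ctrl /dev/nvme3)
  # FDP support is CTRATT bit 19, the same (( ctratt & 1 << 19 )) test as above
  (( reg[ctratt] & 1 << 19 )) && echo "nvme3 supports FDP (ctratt=${reg[ctratt]})"

On this run only nvme3 reports ctratt=0x88010 (bit 19 set); the other three controllers report 0x8000 and are skipped, which is why the scan below echoes nvme3.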
00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:57.160 03:36:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 )) 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:57.161 03:36:49 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:57.161 03:36:49 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:57.161 03:36:49 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:57.161 03:36:49 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:57.419 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:57.983 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.983 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.983 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.983 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:57.983 03:36:50 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:57.983 03:36:50 nvme_fdp -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:57.984 03:36:50 
nvme_fdp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:57.984 03:36:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:57.984 ************************************ 00:09:57.984 START TEST nvme_flexible_data_placement 00:09:57.984 ************************************ 00:09:57.984 03:36:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:58.241 Initializing NVMe Controllers 00:09:58.241 Attaching to 0000:00:13.0 00:09:58.241 Controller supports FDP Attached to 0000:00:13.0 00:09:58.241 Namespace ID: 1 Endurance Group ID: 1 00:09:58.241 Initialization complete. 00:09:58.241 00:09:58.241 ================================== 00:09:58.241 == FDP tests for Namespace: #01 == 00:09:58.241 ================================== 00:09:58.241 00:09:58.241 Get Feature: FDP: 00:09:58.241 ================= 00:09:58.241 Enabled: Yes 00:09:58.241 FDP configuration Index: 0 00:09:58.241 00:09:58.241 FDP configurations log page 00:09:58.241 =========================== 00:09:58.241 Number of FDP configurations: 1 00:09:58.241 Version: 0 00:09:58.241 Size: 112 00:09:58.241 FDP Configuration Descriptor: 0 00:09:58.241 Descriptor Size: 96 00:09:58.241 Reclaim Group Identifier format: 2 00:09:58.241 FDP Volatile Write Cache: Not Present 00:09:58.241 FDP Configuration: Valid 00:09:58.241 Vendor Specific Size: 0 00:09:58.241 Number of Reclaim Groups: 2 00:09:58.241 Number of Reclaim Unit Handles: 8 00:09:58.241 Max Placement Identifiers: 128 00:09:58.241 Number of Namespaces Supported: 256 00:09:58.241 Reclaim unit Nominal Size: 6000000 bytes 00:09:58.241 Estimated Reclaim Unit Time Limit: Not Reported 00:09:58.241 RUH Desc #000: RUH Type: Initially Isolated 00:09:58.241 RUH Desc #001: RUH Type: Initially Isolated 00:09:58.241 RUH Desc #002: RUH Type: Initially Isolated 00:09:58.241 RUH Desc #003: RUH Type: Initially Isolated 00:09:58.241 RUH Desc #004: RUH Type: Initially Isolated 00:09:58.241 RUH Desc #005: RUH Type: Initially Isolated 00:09:58.241 RUH Desc #006: RUH Type: Initially Isolated 00:09:58.241 RUH Desc #007: RUH Type: Initially Isolated 00:09:58.241 00:09:58.241 FDP reclaim unit handle usage log page 00:09:58.241 ====================================== 00:09:58.241 Number of Reclaim Unit Handles: 8 00:09:58.241 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:58.241 RUH Usage Desc #001: RUH Attributes: Unused 00:09:58.242 RUH Usage Desc #002: RUH Attributes: Unused 00:09:58.242 RUH Usage Desc #003: RUH Attributes: Unused 00:09:58.242 RUH Usage Desc #004: RUH Attributes: Unused 00:09:58.242 RUH Usage Desc #005: RUH Attributes: Unused 00:09:58.242 RUH Usage Desc #006: RUH Attributes: Unused 00:09:58.242 RUH Usage Desc #007: RUH Attributes: Unused 00:09:58.242 00:09:58.242 FDP statistics log page 00:09:58.242 ======================= 00:09:58.242 Host bytes with metadata written: 1200349184 00:09:58.242 Media bytes with metadata written: 1200492544 00:09:58.242 Media bytes erased: 0 00:09:58.242 00:09:58.242 FDP Reclaim unit handle status 00:09:58.242 ============================== 00:09:58.242 Number of RUHS descriptors: 2 00:09:58.242 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000000742 00:09:58.242 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:58.242 00:09:58.242 FDP write on placement id: 0 success 00:09:58.242 00:09:58.242 Set Feature: Enabling FDP events on Placement handle:
#0 Success 00:09:58.242 00:09:58.242 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:58.242 00:09:58.242 Get Feature: FDP Events for Placement handle: #0 00:09:58.242 ======================== 00:09:58.242 Number of FDP Events: 6 00:09:58.242 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:58.242 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:58.242 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes 00:09:58.242 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:58.242 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:58.242 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:58.242 00:09:58.242 FDP events log page 00:09:58.242 =================== 00:09:58.242 Number of FDP events: 1 00:09:58.242 FDP Event #0: 00:09:58.242 Event Type: RU Not Written to Capacity 00:09:58.242 Placement Identifier: Valid 00:09:58.242 NSID: Valid 00:09:58.242 Location: Valid 00:09:58.242 Placement Identifier: 0 00:09:58.242 Event Timestamp: 4 00:09:58.242 Namespace Identifier: 1 00:09:58.242 Reclaim Group Identifier: 0 00:09:58.242 Reclaim Unit Handle Identifier: 0 00:09:58.242 00:09:58.242 FDP test passed 00:09:58.242 ************************************ 00:09:58.242 END TEST nvme_flexible_data_placement 00:09:58.242 ************************************ 00:09:58.242 00:09:58.242 real 0m0.214s 00:09:58.242 user 0m0.051s 00:09:58.242 sys 0m0.061s 00:09:58.242 03:36:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.242 03:36:50 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:58.242 ************************************ 00:09:58.242 END TEST nvme_fdp 00:09:58.242 ************************************ 00:09:58.242 00:09:58.242 real 0m7.472s 00:09:58.242 user 0m1.022s 00:09:58.242 sys 0m1.329s 00:09:58.242 03:36:50 nvme_fdp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.242 03:36:50 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:58.242 03:36:50 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:58.242 03:36:50 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:58.242 03:36:50 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:09:58.242 03:36:50 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:58.242 03:36:50 -- common/autotest_common.sh@10 -- # set +x 00:09:58.242 ************************************ 00:09:58.242 START TEST nvme_rpc 00:09:58.242 ************************************ 00:09:58.242 03:36:50 nvme_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:58.499 * Looking for test storage...
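For reference, the RUAMW fields in the reclaim unit handle status dump above are plain hex counters of remaining media writes (in logical blocks) before the reclaim unit is full. A quick bash conversion of the two values copied from this run:

  # RUAMW values from the RUHS descriptors above, shown in decimal
  for ruamw in 0x0000000000000742 0x0000000000006000; do
      echo "$ruamw -> $(( ruamw )) logical blocks of write headroom"
  done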
00:09:58.499 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:58.499 03:36:50 nvme_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:09:58.499 03:36:50 nvme_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:09:58.499 03:36:50 nvme_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:09:58.499 03:36:50 nvme_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:09:58.499 03:36:50 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:58.499 03:36:50 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:58.499 03:36:50 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:58.500 03:36:50 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:09:58.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.500 --rc genhtml_branch_coverage=1 00:09:58.500 --rc genhtml_function_coverage=1 00:09:58.500 --rc genhtml_legend=1 00:09:58.500 --rc geninfo_all_blocks=1 00:09:58.500 --rc geninfo_unexecuted_blocks=1 00:09:58.500 00:09:58.500 ' 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:09:58.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.500 --rc genhtml_branch_coverage=1 00:09:58.500 --rc genhtml_function_coverage=1 00:09:58.500 --rc genhtml_legend=1 00:09:58.500 --rc geninfo_all_blocks=1 00:09:58.500 --rc geninfo_unexecuted_blocks=1 00:09:58.500 00:09:58.500 ' 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 
00:09:58.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.500 --rc genhtml_branch_coverage=1 00:09:58.500 --rc genhtml_function_coverage=1 00:09:58.500 --rc genhtml_legend=1 00:09:58.500 --rc geninfo_all_blocks=1 00:09:58.500 --rc geninfo_unexecuted_blocks=1 00:09:58.500 00:09:58.500 ' 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:09:58.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:58.500 --rc genhtml_branch_coverage=1 00:09:58.500 --rc genhtml_function_coverage=1 00:09:58.500 --rc genhtml_legend=1 00:09:58.500 --rc geninfo_all_blocks=1 00:09:58.500 --rc geninfo_unexecuted_blocks=1 00:09:58.500 00:09:58.500 ' 00:09:58.500 03:36:50 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:58.500 03:36:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1507 -- # bdfs=() 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1507 -- # local bdfs 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1496 -- # local bdfs 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:09:58.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:58.500 03:36:50 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:58.500 03:36:50 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=66227 00:09:58.500 03:36:50 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:58.500 03:36:50 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:58.500 03:36:50 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 66227 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@831 -- # '[' -z 66227 ']' 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:58.500 03:36:50 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:58.500 [2024-10-01 03:36:51.014490] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
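get_first_nvme_bdf, whose expansion is traced above, resolves which controller to attach by asking gen_nvme.sh for a JSON config and taking the first PCI address it reports. A condensed sketch of the same flow (head -n1 stands in for the array indexing the real helper uses):

# Enumerate NVMe PCI addresses from SPDK's generated config; on this
# VM that yields 0000:00:10.0 through 0000:00:13.0.
rootdir=/home/vagrant/spdk_repo/spdk
get_nvme_bdfs() {
    local bdfs
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    ((${#bdfs[@]} == 0)) && return 1    # fail if no controllers were found
    printf '%s\n' "${bdfs[@]}"
}
get_first_nvme_bdf() {
    get_nvme_bdfs | head -n1            # -> 0000:00:10.0 in this run
}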
00:09:58.500 [2024-10-01 03:36:51.014747] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66227 ] 00:09:58.758 [2024-10-01 03:36:51.166836] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:59.015 [2024-10-01 03:36:51.348009] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.015 [2024-10-01 03:36:51.348018] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.580 03:36:51 nvme_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.580 03:36:51 nvme_rpc -- common/autotest_common.sh@864 -- # return 0 00:09:59.580 03:36:51 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:59.837 Nvme0n1 00:09:59.837 03:36:52 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:59.837 03:36:52 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:59.837 request: 00:09:59.837 { 00:09:59.837 "bdev_name": "Nvme0n1", 00:09:59.837 "filename": "non_existing_file", 00:09:59.837 "method": "bdev_nvme_apply_firmware", 00:09:59.837 "req_id": 1 00:09:59.837 } 00:09:59.837 Got JSON-RPC error response 00:09:59.837 response: 00:09:59.837 { 00:09:59.837 "code": -32603, 00:09:59.837 "message": "open file failed." 00:09:59.837 } 00:10:00.095 03:36:52 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:10:00.095 03:36:52 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:10:00.095 03:36:52 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:10:00.095 03:36:52 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:10:00.095 03:36:52 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 66227 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@950 -- # '[' -z 66227 ']' 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@954 -- # kill -0 66227 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@955 -- # uname 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66227 00:10:00.095 killing process with pid 66227 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66227' 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@969 -- # kill 66227 00:10:00.095 03:36:52 nvme_rpc -- common/autotest_common.sh@974 -- # wait 66227 00:10:02.018 ************************************ 00:10:02.018 END TEST nvme_rpc 00:10:02.018 ************************************ 00:10:02.018 00:10:02.018 real 0m3.376s 00:10:02.018 user 0m6.284s 00:10:02.018 sys 0m0.480s 00:10:02.018 03:36:54 nvme_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:02.018 03:36:54 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:02.018 03:36:54 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:02.018 03:36:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:10:02.018 03:36:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:02.018 03:36:54 -- common/autotest_common.sh@10 -- # set +x 00:10:02.018 ************************************ 00:10:02.018 START TEST nvme_rpc_timeouts 00:10:02.018 ************************************ 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:10:02.018 * Looking for test storage... 00:10:02.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lcov --version 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:02.018 03:36:54 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.018 --rc genhtml_branch_coverage=1 00:10:02.018 --rc genhtml_function_coverage=1 00:10:02.018 --rc genhtml_legend=1 00:10:02.018 --rc geninfo_all_blocks=1 00:10:02.018 --rc geninfo_unexecuted_blocks=1 00:10:02.018 00:10:02.018 ' 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.018 --rc genhtml_branch_coverage=1 00:10:02.018 --rc genhtml_function_coverage=1 00:10:02.018 --rc genhtml_legend=1 00:10:02.018 --rc geninfo_all_blocks=1 00:10:02.018 --rc geninfo_unexecuted_blocks=1 00:10:02.018 00:10:02.018 ' 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.018 --rc genhtml_branch_coverage=1 00:10:02.018 --rc genhtml_function_coverage=1 00:10:02.018 --rc genhtml_legend=1 00:10:02.018 --rc geninfo_all_blocks=1 00:10:02.018 --rc geninfo_unexecuted_blocks=1 00:10:02.018 00:10:02.018 ' 00:10:02.018 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:02.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:02.018 --rc genhtml_branch_coverage=1 00:10:02.018 --rc genhtml_function_coverage=1 00:10:02.018 --rc genhtml_legend=1 00:10:02.018 --rc geninfo_all_blocks=1 00:10:02.018 --rc geninfo_unexecuted_blocks=1 00:10:02.018 00:10:02.018 ' 00:10:02.018 03:36:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:02.018 03:36:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_66292 00:10:02.018 03:36:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_66292 00:10:02.018 03:36:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=66324 00:10:02.018 03:36:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 
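Before touching any settings, the trace above launches a target, records its pid, and installs a cleanup trap so an interrupted run cannot leak the process or the temp files. The skeleton of that pattern (the 66292 suffix is this run's shell pid; waitforlisten is SPDK's helper that polls the /var/tmp/spdk.sock RPC socket; backgrounding with & is inferred, since xtrace does not show it):

# Start spdk_tgt on cores 0-1 and guarantee cleanup on exit or signal,
# matching the trap installed at nvme_rpc_timeouts.sh@26 above.
tmpfile_default_settings=/tmp/settings_default_$$
tmpfile_modified_settings=/tmp/settings_modified_$$

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 &
spdk_tgt_pid=$!
trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings}; exit 1' \
    SIGINT SIGTERM EXIT

waitforlisten "$spdk_tgt_pid"   # block until the RPC server answers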
00:10:02.018 03:36:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 66324 00:10:02.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:02.019 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@831 -- # '[' -z 66324 ']' 00:10:02.019 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:02.019 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:02.019 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:02.019 03:36:54 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:10:02.019 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:02.019 03:36:54 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:02.019 [2024-10-01 03:36:54.385827] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:10:02.019 [2024-10-01 03:36:54.385964] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66324 ] 00:10:02.019 [2024-10-01 03:36:54.535325] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:02.277 [2024-10-01 03:36:54.714767] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:10:02.277 [2024-10-01 03:36:54.714822] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.842 Checking default timeout settings: 00:10:02.842 03:36:55 nvme_rpc_timeouts -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:02.842 03:36:55 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # return 0 00:10:02.842 03:36:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:10:02.842 03:36:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:03.099 Making settings changes with rpc: 00:10:03.099 03:36:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:10:03.099 03:36:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:10:03.357 03:36:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:10:03.357 Check default vs. 
modified settings: 00:10:03.357 03:36:55 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_66292 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_66292 00:10:03.616 Setting action_on_timeout is changed as expected. 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_66292 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_66292 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:03.616 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:03.874 Setting timeout_us is changed as expected. 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_66292 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_66292 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:10:03.874 Setting timeout_admin_us is changed as expected. 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_66292 /tmp/settings_modified_66292 00:10:03.874 03:36:56 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 66324 00:10:03.874 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@950 -- # '[' -z 66324 ']' 00:10:03.874 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # kill -0 66324 00:10:03.875 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # uname 00:10:03.875 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.875 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66324 00:10:03.875 killing process with pid 66324 00:10:03.875 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.875 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.875 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66324' 00:10:03.875 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@969 -- # kill 66324 00:10:03.875 03:36:56 nvme_rpc_timeouts -- common/autotest_common.sh@974 -- # wait 66324 00:10:05.247 RPC TIMEOUT SETTING TEST PASSED. 00:10:05.247 03:36:57 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
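Putting the pieces of this test together: defaults are snapshotted with save_config, one bdev_nvme_set_options call changes all three knobs (12 s I/O timeout, 24 s admin timeout, abort on expiry), and a grep/awk/sed loop asserts that every value actually moved. A compact sketch assembled from the commands traced above (save_config's redirection to the temp files and the failure branch are not visible in xtrace and are inferred):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$rpc save_config > /tmp/settings_default_66292
$rpc bdev_nvme_set_options --timeout-us=12000000 \
    --timeout-admin-us=24000000 --action-on-timeout=abort
$rpc save_config > /tmp/settings_modified_66292

# For each knob, pull the JSON value and strip punctuation, so e.g.
# '"abort",' becomes 'abort' -- the same pipeline seen in the trace.
for setting in action_on_timeout timeout_us timeout_admin_us; do
    before=$(grep "$setting" /tmp/settings_default_66292 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    after=$(grep "$setting" /tmp/settings_modified_66292 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
    [ "$before" == "$after" ] && { echo "Setting $setting was not changed!"; exit 1; }
    echo "Setting $setting is changed as expected."
done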
00:10:05.247 ************************************ 00:10:05.247 END TEST nvme_rpc_timeouts 00:10:05.247 ************************************ 00:10:05.247 00:10:05.247 real 0m3.467s 00:10:05.247 user 0m6.603s 00:10:05.247 sys 0m0.483s 00:10:05.247 03:36:57 nvme_rpc_timeouts -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:05.247 03:36:57 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:10:05.247 03:36:57 -- spdk/autotest.sh@239 -- # uname -s 00:10:05.247 03:36:57 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:10:05.247 03:36:57 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:05.247 03:36:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:10:05.247 03:36:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:05.247 03:36:57 -- common/autotest_common.sh@10 -- # set +x 00:10:05.247 ************************************ 00:10:05.247 START TEST sw_hotplug 00:10:05.247 ************************************ 00:10:05.247 03:36:57 sw_hotplug -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:10:05.247 * Looking for test storage... 00:10:05.247 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:05.247 03:36:57 sw_hotplug -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:10:05.247 03:36:57 sw_hotplug -- common/autotest_common.sh@1681 -- # lcov --version 00:10:05.247 03:36:57 sw_hotplug -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:10:05.505 03:36:57 sw_hotplug -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:05.505 03:36:57 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:10:05.505 03:36:57 sw_hotplug -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:05.505 03:36:57 sw_hotplug -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:10:05.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.505 --rc genhtml_branch_coverage=1 00:10:05.505 --rc genhtml_function_coverage=1 00:10:05.505 --rc genhtml_legend=1 00:10:05.505 --rc geninfo_all_blocks=1 00:10:05.505 --rc geninfo_unexecuted_blocks=1 00:10:05.505 00:10:05.505 ' 00:10:05.505 03:36:57 sw_hotplug -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:10:05.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.505 --rc genhtml_branch_coverage=1 00:10:05.505 --rc genhtml_function_coverage=1 00:10:05.505 --rc genhtml_legend=1 00:10:05.505 --rc geninfo_all_blocks=1 00:10:05.505 --rc geninfo_unexecuted_blocks=1 00:10:05.505 00:10:05.505 ' 00:10:05.505 03:36:57 sw_hotplug -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:10:05.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.505 --rc genhtml_branch_coverage=1 00:10:05.505 --rc genhtml_function_coverage=1 00:10:05.505 --rc genhtml_legend=1 00:10:05.505 --rc geninfo_all_blocks=1 00:10:05.505 --rc geninfo_unexecuted_blocks=1 00:10:05.505 00:10:05.505 ' 00:10:05.505 03:36:57 sw_hotplug -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:10:05.505 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:05.505 --rc genhtml_branch_coverage=1 00:10:05.505 --rc genhtml_function_coverage=1 00:10:05.505 --rc genhtml_legend=1 00:10:05.505 --rc geninfo_all_blocks=1 00:10:05.505 --rc geninfo_unexecuted_blocks=1 00:10:05.505 00:10:05.505 ' 00:10:05.505 03:36:57 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:05.763 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:05.763 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:05.763 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:05.763 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:05.763 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:10:05.763 03:36:58 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:10:05.763 03:36:58 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:10:05.763 03:36:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
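nvme_in_userspace, expanded in the trace that follows, finds controllers by PCI class code rather than by name: class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVM Express). Its core is a single lspci pipeline, shown here standalone with the exact flags from the trace:

# -mm: machine-readable, -n: numeric IDs, -D: always include the PCI domain.
# grep keeps prog-if 02 entries; awk keeps rows whose class field is 0108;
# tr strips the surrounding quotes from the addresses.
lspci -mm -n -D | grep -i -- -p02 \
    | awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'
# -> the four controller addresses above, one per line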
00:10:05.763 03:36:58 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@233 -- # local class 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:05.763 03:36:58 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@18 -- # local i 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:05.763 03:36:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:10:05.764 03:36:58 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:05.764 03:36:58 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:10:05.764 03:36:58 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:10:05.764 03:36:58 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:10:06.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:06.328 Waiting for block devices as requested 00:10:06.328 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:10:06.328 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:10:06.593 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:10:06.593 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:10:11.849 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:10:11.849 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:10:11.849 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:10:11.849 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:10:12.145 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:10:12.145 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:10:12.145 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:10:12.403 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.403 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:10:12.403 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:10:12.403 03:37:04 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=67178 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:10:12.661 03:37:04 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:10:12.661 03:37:04 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:10:12.661 03:37:04 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:10:12.661 03:37:04 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:10:12.661 03:37:04 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:12.661 03:37:04 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:12.661 Initializing NVMe Controllers 00:10:12.661 Attaching to 0000:00:10.0 00:10:12.661 Attaching to 0000:00:11.0 00:10:12.661 Attached to 0000:00:10.0 00:10:12.661 Attached to 0000:00:11.0 00:10:12.661 Initialization complete. Starting I/O... 
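Each of the three hotplug events that follow yanks both allowed controllers out from under the running hotplug app and then brings them back. The writes appear in the trace only as bare `echo 1` / `echo uio_pci_generic` lines because xtrace does not show redirections; below is a standalone sketch of the likely sysfs sequence, with target paths inferred from the standard Linux PCI interface and from the `echo 1 > /sys/bus/pci/rescan` visible in this test's trap:

bdf=0000:00:10.0                                  # first allowed controller
# Surprise-remove the function while I/O is in flight ...
echo 1 > "/sys/bus/pci/devices/$bdf/remove"
sleep 6                                           # hotplug_wait in this test
# ... rediscover it, pin its next probe to uio_pci_generic, and bind:
echo 1 > /sys/bus/pci/rescan
echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
echo "$bdf" > /sys/bus/pci/drivers_probe
echo '' > "/sys/bus/pci/devices/$bdf/driver_override"   # clear the override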
00:10:12.661 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:10:12.661 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:10:12.661 00:10:14.034 QEMU NVMe Ctrl (12340 ): 2670 I/Os completed (+2670) 00:10:14.034 QEMU NVMe Ctrl (12341 ): 2617 I/Os completed (+2617) 00:10:14.034 00:10:14.971 QEMU NVMe Ctrl (12340 ): 6053 I/Os completed (+3383) 00:10:14.971 QEMU NVMe Ctrl (12341 ): 5799 I/Os completed (+3182) 00:10:14.971 00:10:15.906 QEMU NVMe Ctrl (12340 ): 9975 I/Os completed (+3922) 00:10:15.906 QEMU NVMe Ctrl (12341 ): 9712 I/Os completed (+3913) 00:10:15.906 00:10:16.840 QEMU NVMe Ctrl (12340 ): 13763 I/Os completed (+3788) 00:10:16.840 QEMU NVMe Ctrl (12341 ): 13496 I/Os completed (+3784) 00:10:16.840 00:10:17.774 QEMU NVMe Ctrl (12340 ): 17491 I/Os completed (+3728) 00:10:17.774 QEMU NVMe Ctrl (12341 ): 16776 I/Os completed (+3280) 00:10:17.774 00:10:18.715 03:37:10 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:18.715 03:37:10 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:18.715 03:37:10 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:18.715 [2024-10-01 03:37:11.002072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:18.715 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:18.715 [2024-10-01 03:37:11.003233] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.715 [2024-10-01 03:37:11.003289] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.715 [2024-10-01 03:37:11.003308] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.715 [2024-10-01 03:37:11.003327] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.715 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:18.715 [2024-10-01 03:37:11.005282] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.715 [2024-10-01 03:37:11.005380] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.715 [2024-10-01 03:37:11.005452] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.715 [2024-10-01 03:37:11.005483] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.715 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:18.715 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:18.715 [2024-10-01 03:37:11.030761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:18.715 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:18.715 [2024-10-01 03:37:11.031836] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.716 [2024-10-01 03:37:11.031873] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.716 [2024-10-01 03:37:11.031893] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.716 [2024-10-01 03:37:11.031910] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.716 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:18.716 [2024-10-01 03:37:11.033540] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.716 [2024-10-01 03:37:11.033571] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.716 [2024-10-01 03:37:11.033587] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.716 [2024-10-01 03:37:11.033601] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:18.716 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:18.716 EAL: Scan for (pci) bus failed. 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:18.716 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:18.716 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:18.716 Attaching to 0000:00:10.0 00:10:18.716 Attached to 0000:00:10.0 00:10:18.974 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:18.974 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:18.974 03:37:11 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:18.974 Attaching to 0000:00:11.0 00:10:18.974 Attached to 0000:00:11.0 00:10:19.915 QEMU NVMe Ctrl (12340 ): 3532 I/Os completed (+3532) 00:10:19.915 QEMU NVMe Ctrl (12341 ): 3163 I/Os completed (+3163) 00:10:19.915 00:10:20.853 QEMU NVMe Ctrl (12340 ): 7530 I/Os completed (+3998) 00:10:20.853 QEMU NVMe Ctrl (12341 ): 7162 I/Os completed (+3999) 00:10:20.853 00:10:21.786 QEMU NVMe Ctrl (12340 ): 12146 I/Os completed (+4616) 00:10:21.786 QEMU NVMe Ctrl (12341 ): 12614 I/Os completed (+5452) 00:10:21.786 00:10:22.718 QEMU NVMe Ctrl (12340 ): 15285 I/Os completed (+3139) 00:10:22.718 QEMU NVMe Ctrl (12341 ): 15682 I/Os completed (+3068) 00:10:22.718 00:10:23.657 QEMU NVMe Ctrl (12340 ): 18446 I/Os completed (+3161) 00:10:23.657 QEMU NVMe Ctrl (12341 ): 18920 I/Os completed (+3238) 00:10:23.657 00:10:25.035 QEMU NVMe Ctrl (12340 ): 21989 I/Os completed (+3543) 00:10:25.035 QEMU NVMe Ctrl (12341 ): 22771 I/Os completed (+3851) 00:10:25.035 00:10:25.974 QEMU NVMe Ctrl (12340 ): 25437 I/Os completed (+3448) 00:10:25.974 QEMU NVMe Ctrl (12341 ): 26222 I/Os completed (+3451) 
00:10:25.974 00:10:26.909 QEMU NVMe Ctrl (12340 ): 29092 I/Os completed (+3655) 00:10:26.909 QEMU NVMe Ctrl (12341 ): 29908 I/Os completed (+3686) 00:10:26.909 00:10:27.845 QEMU NVMe Ctrl (12340 ): 32681 I/Os completed (+3589) 00:10:27.845 QEMU NVMe Ctrl (12341 ): 33503 I/Os completed (+3595) 00:10:27.845 00:10:28.779 QEMU NVMe Ctrl (12340 ): 36207 I/Os completed (+3526) 00:10:28.779 QEMU NVMe Ctrl (12341 ): 37050 I/Os completed (+3547) 00:10:28.779 00:10:29.711 QEMU NVMe Ctrl (12340 ): 39526 I/Os completed (+3319) 00:10:29.711 QEMU NVMe Ctrl (12341 ): 40314 I/Os completed (+3264) 00:10:29.711 00:10:30.645 QEMU NVMe Ctrl (12340 ): 43108 I/Os completed (+3582) 00:10:30.645 QEMU NVMe Ctrl (12341 ): 43934 I/Os completed (+3620) 00:10:30.645 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:30.903 [2024-10-01 03:37:23.297244] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:30.903 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:30.903 [2024-10-01 03:37:23.298236] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.298278] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.298295] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.298311] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:30.903 [2024-10-01 03:37:23.299927] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.299966] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.299978] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.299992] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:30.903 [2024-10-01 03:37:23.321775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:30.903 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:30.903 [2024-10-01 03:37:23.322671] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.322706] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.322727] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.322745] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:30.903 [2024-10-01 03:37:23.324119] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.324151] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.324163] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 [2024-10-01 03:37:23.324176] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:30.903 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:30.903 EAL: Scan for (pci) bus failed. 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:30.903 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:31.160 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:31.160 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.160 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:31.160 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:31.160 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:31.160 Attaching to 0000:00:10.0 00:10:31.160 Attached to 0000:00:10.0 00:10:31.160 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:31.160 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:31.160 03:37:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:31.160 Attaching to 0000:00:11.0 00:10:31.160 Attached to 0000:00:11.0 00:10:31.725 QEMU NVMe Ctrl (12340 ): 2500 I/Os completed (+2500) 00:10:31.725 QEMU NVMe Ctrl (12341 ): 2172 I/Os completed (+2172) 00:10:31.725 00:10:32.683 QEMU NVMe Ctrl (12340 ): 7345 I/Os completed (+4845) 00:10:32.683 QEMU NVMe Ctrl (12341 ): 6959 I/Os completed (+4787) 00:10:32.683 00:10:34.053 QEMU NVMe Ctrl (12340 ): 11689 I/Os completed (+4344) 00:10:34.053 QEMU NVMe Ctrl (12341 ): 11798 I/Os completed (+4839) 00:10:34.053 00:10:34.987 QEMU NVMe Ctrl (12340 ): 15221 I/Os completed (+3532) 00:10:34.987 QEMU NVMe Ctrl (12341 ): 15363 I/Os completed (+3565) 00:10:34.987 00:10:35.923 QEMU NVMe Ctrl (12340 ): 18822 I/Os completed (+3601) 00:10:35.923 QEMU NVMe Ctrl (12341 ): 18967 I/Os completed (+3604) 00:10:35.923 00:10:36.856 QEMU NVMe Ctrl (12340 ): 22505 I/Os completed (+3683) 00:10:36.856 QEMU NVMe Ctrl (12341 ): 22599 I/Os completed (+3632) 00:10:36.856 00:10:37.795 QEMU NVMe Ctrl (12340 ): 26028 I/Os completed (+3523) 00:10:37.795 QEMU NVMe Ctrl (12341 ): 26220 I/Os completed (+3621) 00:10:37.795 
00:10:38.740 QEMU NVMe Ctrl (12340 ): 29320 I/Os completed (+3292) 00:10:38.740 QEMU NVMe Ctrl (12341 ): 29519 I/Os completed (+3299) 00:10:38.740 00:10:39.680 QEMU NVMe Ctrl (12340 ): 32308 I/Os completed (+2988) 00:10:39.680 QEMU NVMe Ctrl (12341 ): 32504 I/Os completed (+2985) 00:10:39.680 00:10:41.052 QEMU NVMe Ctrl (12340 ): 35965 I/Os completed (+3657) 00:10:41.052 QEMU NVMe Ctrl (12341 ): 36073 I/Os completed (+3569) 00:10:41.052 00:10:41.987 QEMU NVMe Ctrl (12340 ): 39575 I/Os completed (+3610) 00:10:41.987 QEMU NVMe Ctrl (12341 ): 39638 I/Os completed (+3565) 00:10:41.987 00:10:42.922 QEMU NVMe Ctrl (12340 ): 43222 I/Os completed (+3647) 00:10:42.922 QEMU NVMe Ctrl (12341 ): 43299 I/Os completed (+3661) 00:10:42.922 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:43.179 [2024-10-01 03:37:35.572680] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:10:43.179 Controller removed: QEMU NVMe Ctrl (12340 ) 00:10:43.179 [2024-10-01 03:37:35.573676] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.573722] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.573738] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.573752] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:43.179 [2024-10-01 03:37:35.575445] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.575485] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.575497] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.575509] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:43.179 [2024-10-01 03:37:35.597070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
00:10:43.179 Controller removed: QEMU NVMe Ctrl (12341 ) 00:10:43.179 [2024-10-01 03:37:35.597932] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.597968] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.597985] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.598027] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:43.179 [2024-10-01 03:37:35.599417] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.599445] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.599461] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 [2024-10-01 03:37:35.599473] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:43.179 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:10:43.179 EAL: Scan for (pci) bus failed. 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:43.179 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:43.436 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:43.436 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:43.436 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:43.437 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:43.437 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:43.437 Attaching to 0000:00:10.0 00:10:43.437 Attached to 0000:00:10.0 00:10:43.437 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:43.437 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:43.437 03:37:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:43.437 Attaching to 0000:00:11.0 00:10:43.437 Attached to 0000:00:11.0 00:10:43.437 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:43.437 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:43.437 [2024-10-01 03:37:35.847233] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:55.651 03:37:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:55.651 03:37:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:55.651 03:37:47 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.84 00:10:55.651 03:37:47 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.84 00:10:55.651 03:37:47 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:10:55.651 03:37:47 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.84 00:10:55.651 03:37:47 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.84 2 00:10:55.651 remove_attach_helper took 42.84s to complete (handling 2 nvme drive(s)) 03:37:47 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:11:02.236 03:37:53 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 67178 00:11:02.236 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (67178) - No such process 00:11:02.236 03:37:53 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 67178 00:11:02.236 03:37:53 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:11:02.236 03:37:53 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:11:02.236 03:37:53 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:11:02.236 03:37:53 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=67728 00:11:02.236 03:37:53 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:11:02.236 03:37:53 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 67728 00:11:02.236 03:37:53 sw_hotplug -- common/autotest_common.sh@831 -- # '[' -z 67728 ']' 00:11:02.236 03:37:53 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:02.236 03:37:53 sw_hotplug -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.236 03:37:53 sw_hotplug -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:02.236 03:37:53 sw_hotplug -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.236 03:37:53 sw_hotplug -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:02.236 03:37:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.236 [2024-10-01 03:37:53.947855] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
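The second phase (tgt_run_hotplug) replays the same surprise removals against a long-lived spdk_tgt: it turns on the bdev layer's hotplug monitor and then watches which PCI addresses still back bdevs, as traced just below. The two RPCs involved:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Enable SPDK's periodic NVMe hotplug scan inside the running target.
$rpc bdev_nvme_set_hotplug -e

# PCI addresses currently backing NVMe bdevs; the test polls this list
# to confirm controllers vanish on remove and return on rescan.
$rpc bdev_get_bdevs | jq -r '.[].driver_specific.nvme[].pci_address' | sort -u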
00:11:02.236 [2024-10-01 03:37:53.948031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67728 ] 00:11:02.236 [2024-10-01 03:37:54.101214] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.236 [2024-10-01 03:37:54.368454] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@864 -- # return 0 00:11:02.807 03:37:55 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.807 03:37:55 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:11:02.807 03:37:55 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:02.807 03:37:55 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:02.807 03:37:55 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:02.807 03:37:55 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:02.807 03:37:55 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:02.807 03:37:55 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:02.807 03:37:55 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:02.807 03:37:55 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:09.391 03:38:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.391 03:38:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:09.391 03:38:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:09.391 [2024-10-01 03:38:01.265293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0] in failed state. 00:11:09.391 [2024-10-01 03:38:01.266588] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.391 [2024-10-01 03:38:01.266626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.391 [2024-10-01 03:38:01.266639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.391 [2024-10-01 03:38:01.266662] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.391 [2024-10-01 03:38:01.266670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.391 [2024-10-01 03:38:01.266679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.391 [2024-10-01 03:38:01.266687] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.391 [2024-10-01 03:38:01.266696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.391 [2024-10-01 03:38:01.266703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.391 [2024-10-01 03:38:01.266715] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.391 [2024-10-01 03:38:01.266722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.391 [2024-10-01 03:38:01.266730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.391 [2024-10-01 03:38:01.665286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
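Each hotplug event begins with the echo 1 writes traced at line 40 (one per device in the nvmes array); that is a surprise removal, and the records above are its expected fallout: nvme_ctrlr_fail marks the controller failed, then every still-queued admin command is aborted. A sketch of the removal side, assuming the conventional sysfs remove node (the actual redirection target is not visible in the xtrace):

# Surprise-remove each NVMe function; the bdf list is illustrative.
nvmes=(0000:00:10.0 0000:00:11.0)
for bdf in "${nvmes[@]}"; do
    echo 1 > "/sys/bus/pci/devices/$bdf/remove"
done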
00:11:09.391 [2024-10-01 03:38:01.666590] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.391 [2024-10-01 03:38:01.666622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.391 [2024-10-01 03:38:01.666633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.391 [2024-10-01 03:38:01.666645] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.391 [2024-10-01 03:38:01.666655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.391 [2024-10-01 03:38:01.666661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.391 [2024-10-01 03:38:01.666671] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.391 [2024-10-01 03:38:01.666677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.391 [2024-10-01 03:38:01.666685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.391 [2024-10-01 03:38:01.666693] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:09.391 [2024-10-01 03:38:01.666701] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:09.391 [2024-10-01 03:38:01.666707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:09.391 03:38:01 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.391 03:38:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:09.391 03:38:01 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:09.391 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:09.652 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:09.652 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:09.653 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:09.653 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:09.653 03:38:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:09.653 03:38:02 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:09.653 03:38:02 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:09.653 03:38:02 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.961 03:38:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.961 03:38:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.961 03:38:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:21.961 03:38:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.961 03:38:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:21.961 [2024-10-01 03:38:14.165504] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
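The bdev_bdfs helper traced at lines 12-13 of sw_hotplug.sh is how the test decides whether the target still sees the controllers: it dumps all bdevs over JSON-RPC and boils the result down to a sorted, de-duplicated list of NVMe PCI addresses (the /dev/fd/63 argument in the trace is just jq reading from a process substitution). Reconstructed from the trace, with rpc_cmd standing in for the repo's RPC wrapper:

bdev_bdfs() {
    # One line per distinct NVMe bdf currently attached to the target.
    rpc_cmd bdev_get_bdevs \
        | jq -r '.[].driver_specific.nvme[].pci_address' \
        | sort -u
}
bdfs=($(bdev_bdfs))   # line 50 in the trace captures it into an array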
00:11:21.961 [2024-10-01 03:38:14.166744] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.961 [2024-10-01 03:38:14.166770] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.961 [2024-10-01 03:38:14.166782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.961 [2024-10-01 03:38:14.166804] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.961 [2024-10-01 03:38:14.166812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.961 [2024-10-01 03:38:14.166820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.961 [2024-10-01 03:38:14.166828] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.961 [2024-10-01 03:38:14.166837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.961 [2024-10-01 03:38:14.166844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.961 [2024-10-01 03:38:14.166852] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:21.961 [2024-10-01 03:38:14.166859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:21.961 [2024-10-01 03:38:14.166867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:21.961 03:38:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:21.961 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:22.222 [2024-10-01 03:38:14.565501] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
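Lines 50-51 then poll that helper until the removed controllers vanish from the target's view; the trace shows the count dropping from (( 2 > 0 )) to (( 0 > 0 )) across half-second sleeps. The loop, reconstructed from the traced line numbers (the repo's exact control flow may differ slightly):

# Wait for the surprise-removed controllers to disappear.
bdfs=($(bdev_bdfs))
while (( ${#bdfs[@]} > 0 )); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
    bdfs=($(bdev_bdfs))
done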
00:11:22.222 [2024-10-01 03:38:14.566714] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.223 [2024-10-01 03:38:14.566743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.223 [2024-10-01 03:38:14.566757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.223 [2024-10-01 03:38:14.566768] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.223 [2024-10-01 03:38:14.566777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.223 [2024-10-01 03:38:14.566784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.223 [2024-10-01 03:38:14.566794] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.223 [2024-10-01 03:38:14.566800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.223 [2024-10-01 03:38:14.566809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.223 [2024-10-01 03:38:14.566816] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:22.223 [2024-10-01 03:38:14.566824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:22.223 [2024-10-01 03:38:14.566831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:22.223 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:22.223 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:22.223 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:22.223 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:22.223 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:22.223 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:22.223 03:38:14 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.223 03:38:14 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:22.223 03:38:14 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.223 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:22.223 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:22.484 03:38:14 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:34.725 03:38:26 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:34.725 03:38:26 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.725 03:38:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.725 03:38:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.725 03:38:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:34.725 [2024-10-01 03:38:27.065721] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 00:11:34.725 [2024-10-01 03:38:27.067195] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.725 [2024-10-01 03:38:27.067229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.725 [2024-10-01 03:38:27.067240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.725 [2024-10-01 03:38:27.067260] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.725 [2024-10-01 03:38:27.067268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.725 [2024-10-01 03:38:27.067278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.725 [2024-10-01 03:38:27.067286] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.725 [2024-10-01 03:38:27.067294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.725 [2024-10-01 03:38:27.067301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.725 [2024-10-01 03:38:27.067310] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.725 [2024-10-01 03:38:27.067317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.725 [2024-10-01 03:38:27.067325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) 
qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:34.725 03:38:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.725 03:38:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:34.725 03:38:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:34.725 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:34.988 [2024-10-01 03:38:27.465715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 00:11:34.988 [2024-10-01 03:38:27.466920] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.988 [2024-10-01 03:38:27.466948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.988 [2024-10-01 03:38:27.466960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.988 [2024-10-01 03:38:27.466972] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.988 [2024-10-01 03:38:27.466981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.988 [2024-10-01 03:38:27.466988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.988 [2024-10-01 03:38:27.466997] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.988 [2024-10-01 03:38:27.467016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.988 [2024-10-01 03:38:27.467027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:34.988 [2024-10-01 03:38:27.467035] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:34.988 [2024-10-01 03:38:27.467043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:34.988 [2024-10-01 03:38:27.467050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:11:35.249 03:38:27 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.249 03:38:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:35.249 03:38:27 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.249 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:35.510 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:35.510 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.510 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:35.510 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:35.510 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:35.510 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:35.510 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:35.510 03:38:27 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.79 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.79 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.79 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.79 2 00:11:47.739 remove_attach_helper took 44.79s to complete (handling 2 nvme drive(s)) 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.739 03:38:39 sw_hotplug -- 
nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:47.739 03:38:39 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:47.739 03:38:39 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:54.343 03:38:45 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:54.343 03:38:45 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.343 03:38:45 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.343 03:38:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.343 03:38:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.343 03:38:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:54.343 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:54.343 [2024-10-01 03:38:46.081691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
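The timing_cmd wrapper traced above (note the local time=0 TIMEFORMAT=%2R and the exec) is what produces the "remove_attach_helper took 44.79s" summaries: bash's time keyword, with TIMEFORMAT=%2R, reports only elapsed wall time to two decimals. A simplified sketch; the repo's version also preserves the timed command's own output via fd redirection, which this one discards:

timing_cmd() {
    local TIMEFORMAT=%2R time
    # The time report lands on the compound command's stderr; route it
    # to stdout so the command substitution can capture it.
    time=$( { time "$@" >/dev/null 2>&1; } 2>&1 )
    echo "$time"
}
helper_time=$(timing_cmd remove_attach_helper 3 6 true)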
00:11:54.343 [2024-10-01 03:38:46.082748] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.343 [2024-10-01 03:38:46.082780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.343 [2024-10-01 03:38:46.082793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.343 [2024-10-01 03:38:46.082814] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.343 [2024-10-01 03:38:46.082822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.343 [2024-10-01 03:38:46.082832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.344 [2024-10-01 03:38:46.082840] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.344 [2024-10-01 03:38:46.082848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.344 [2024-10-01 03:38:46.082855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.344 [2024-10-01 03:38:46.082864] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.344 [2024-10-01 03:38:46.082871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.344 [2024-10-01 03:38:46.082881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.344 [2024-10-01 03:38:46.481683] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
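The abort pattern above repeats identically for every removal: nvme_ctrlr_fail, then exactly four admin commands (qid:0, cid 187-190) completing as ABORTED - BY REQUEST, each printed as an ASYNC EVENT REQUEST (opcode 0c). Those are the asynchronous event requests the driver keeps outstanding on the admin queue, so four aborts per controller per event is the healthy signature here, not a failure. A quick triage check against a saved copy of this log (file name hypothetical):

# Should be a multiple of 4 per removed controller if only AERs died.
grep -o 'ABORTED - BY REQUEST' sw_hotplug.log | wc -l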
00:11:54.344 [2024-10-01 03:38:46.482846] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.344 [2024-10-01 03:38:46.482873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.344 [2024-10-01 03:38:46.482885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.344 [2024-10-01 03:38:46.482896] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.344 [2024-10-01 03:38:46.482906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.344 [2024-10-01 03:38:46.482913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.344 [2024-10-01 03:38:46.482923] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.344 [2024-10-01 03:38:46.482930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.344 [2024-10-01 03:38:46.482940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.344 [2024-10-01 03:38:46.482948] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:54.344 [2024-10-01 03:38:46.482956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:54.344 [2024-10-01 03:38:46.482963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:54.344 03:38:46 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.344 03:38:46 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:54.344 03:38:46 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:54.344 03:38:46 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.578 03:38:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.578 03:38:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.578 03:38:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:06.578 03:38:58 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.578 03:38:58 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:06.578 03:38:58 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:06.578 03:38:58 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:06.578 [2024-10-01 03:38:58.981883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
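Reattachment is the echo sequence traced at lines 56-62: the echo 1 at line 56 matches the /sys/bus/pci/rescan write visible in the trap installed earlier, and lines 59-62 steer each rediscovered function to uio_pci_generic before clearing the override. Only the echoed values appear in the xtrace, so the sysfs nodes below are assumptions (the trace also writes the bdf twice, at lines 60-61, where this common variant writes it once):

echo 1 > /sys/bus/pci/rescan
for bdf in 0000:00:10.0 0000:00:11.0; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$bdf/driver_override"
    echo "$bdf" > /sys/bus/pci/drivers_probe
    echo '' > "/sys/bus/pci/devices/$bdf/driver_override"
done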
00:12:06.578 [2024-10-01 03:38:58.983085] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.578 [2024-10-01 03:38:58.983120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.578 [2024-10-01 03:38:58.983132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.578 [2024-10-01 03:38:58.983153] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.578 [2024-10-01 03:38:58.983161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.578 [2024-10-01 03:38:58.983170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.578 [2024-10-01 03:38:58.983179] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.578 [2024-10-01 03:38:58.983187] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.578 [2024-10-01 03:38:58.983194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.578 [2024-10-01 03:38:58.983203] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.578 [2024-10-01 03:38:58.983210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.578 [2024-10-01 03:38:58.983218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.840 [2024-10-01 03:38:59.381875] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
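The success check at line 71 is a plain string comparison; the wall of backslashes in the trace is only xtrace quoting every character of the right-hand side, not part of the test. Equivalent code:

expected='0000:00:10.0 0000:00:11.0'
bdfs=($(bdev_bdfs))
# ${bdfs[*]} joins the sorted array with spaces, matching the trace.
[[ ${bdfs[*]} == "$expected" ]] && echo 'both controllers reattached'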
00:12:06.840 [2024-10-01 03:38:59.383134] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.840 [2024-10-01 03:38:59.383161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.840 [2024-10-01 03:38:59.383173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.840 [2024-10-01 03:38:59.383184] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.840 [2024-10-01 03:38:59.383195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.840 [2024-10-01 03:38:59.383202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.840 [2024-10-01 03:38:59.383211] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.840 [2024-10-01 03:38:59.383217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.840 [2024-10-01 03:38:59.383225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:06.840 [2024-10-01 03:38:59.383232] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:06.840 [2024-10-01 03:38:59.383240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:06.840 [2024-10-01 03:38:59.383247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:07.101 03:38:59 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.101 03:38:59 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:07.101 03:38:59 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.101 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:07.362 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:07.362 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.362 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:07.362 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:07.362 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:07.362 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:07.362 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:07.362 03:38:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.587 03:39:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.587 03:39:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.587 03:39:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:19.587 03:39:11 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.587 03:39:11 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:19.587 [2024-10-01 03:39:11.882219] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0] in failed state. 
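Stepping back, the whole remove/wait/reattach cycle above runs under remove_attach_helper 3 6 true: three hotplug events, a six-second settle, with bdev-level verification enabled. Its shape, reconstructed from the traced line numbers (the *_devices and wait helpers are placeholders for the steps shown earlier, not real function names from the repo):

remove_attach_helper() {
    local hotplug_events=$1 hotplug_wait=$2 use_bdev=$3
    sleep "$hotplug_wait"                 # line 36
    while (( hotplug_events-- )); do      # line 38: three iterations
        remove_devices                    # lines 39-43
        wait_until_bdevs_gone             # lines 50-51
        reattach_devices                  # lines 56-62
        sleep $(( hotplug_wait * 2 ))     # line 66: the sleep 12 above
    done
}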
00:12:19.587 [2024-10-01 03:39:11.883278] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.587 [2024-10-01 03:39:11.883314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.587 [2024-10-01 03:39:11.883326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.587 [2024-10-01 03:39:11.883349] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.587 [2024-10-01 03:39:11.883357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.587 [2024-10-01 03:39:11.883367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.587 [2024-10-01 03:39:11.883375] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.587 [2024-10-01 03:39:11.883386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.587 [2024-10-01 03:39:11.883393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.587 [2024-10-01 03:39:11.883402] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.587 [2024-10-01 03:39:11.883409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.587 [2024-10-01 03:39:11.883420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.587 03:39:11 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:19.587 03:39:11 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:19.847 [2024-10-01 03:39:12.282220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0] in failed state. 
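One last trace idiom worth decoding: every rpc_cmd above is bracketed by xtrace_disable and a set +x, which keeps the JSON-RPC plumbing out of the log so only the test's own steps are recorded. A minimal toggle in that spirit (the repo's helper also restores the prior xtrace state, which this sketch skips):

xtrace_disable() { set +x; }
xtrace_restore() { set -x; }

xtrace_disable
rpc_cmd bdev_get_bdevs >/dev/null   # noisy internals stay out of the trace
xtrace_restore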
00:12:19.847 [2024-10-01 03:39:12.283228] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.847 [2024-10-01 03:39:12.283257] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.847 [2024-10-01 03:39:12.283269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.847 [2024-10-01 03:39:12.283284] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.847 [2024-10-01 03:39:12.283292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.847 [2024-10-01 03:39:12.283299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.847 [2024-10-01 03:39:12.283308] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.847 [2024-10-01 03:39:12.283315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.847 [2024-10-01 03:39:12.283324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:19.847 [2024-10-01 03:39:12.283332] nvme_pcie_common.c: 748:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:19.847 [2024-10-01 03:39:12.283342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:19.847 [2024-10-01 03:39:12.283349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:20.107 03:39:12 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:20.107 03:39:12 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:20.107 03:39:12 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:20.107 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:20.365 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:20.365 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:20.365 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:20.365 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:20.365 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:20.365 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:20.365 03:39:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@717 -- # time=44.79 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@718 -- # echo 44.79 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.79 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.79 2 00:12:32.699 remove_attach_helper took 44.79s to complete (handling 2 nvme drive(s)) 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:12:32.699 03:39:24 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 67728 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@950 -- # '[' -z 67728 ']' 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@954 -- # kill -0 67728 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@955 -- # uname 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67728 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:32.699 killing process with pid 67728 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67728' 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@969 -- # kill 67728 00:12:32.699 03:39:24 sw_hotplug -- common/autotest_common.sh@974 -- # wait 67728 00:12:34.073 03:39:26 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:12:34.073 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:12:34.331 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:34.331 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:12:34.589 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:12:34.589 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:12:34.589 00:12:34.589 real 2m29.346s 00:12:34.589 user 1m52.023s 00:12:34.589 sys 0m16.133s 00:12:34.589 03:39:27 sw_hotplug -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.589 ************************************ 00:12:34.589 END TEST sw_hotplug 00:12:34.589 ************************************ 00:12:34.589 03:39:27 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.589 03:39:27 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:12:34.589 03:39:27 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:34.589 03:39:27 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:34.589 03:39:27 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.589 03:39:27 -- common/autotest_common.sh@10 -- # set +x 00:12:34.589 ************************************ 00:12:34.589 START TEST nvme_xnvme 00:12:34.589 ************************************ 00:12:34.589 03:39:27 nvme_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:12:34.589 * Looking for test storage... 00:12:34.589 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:12:34.589 03:39:27 nvme_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:12:34.589 03:39:27 nvme_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:12:34.589 03:39:27 nvme_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:12:34.849 03:39:27 nvme_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:34.849 03:39:27 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:12:34.849 03:39:27 nvme_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:34.849 03:39:27 nvme_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:12:34.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.849 --rc genhtml_branch_coverage=1 00:12:34.849 --rc genhtml_function_coverage=1 00:12:34.849 --rc genhtml_legend=1 00:12:34.849 --rc geninfo_all_blocks=1 00:12:34.849 --rc geninfo_unexecuted_blocks=1 00:12:34.849 00:12:34.849 ' 00:12:34.850 03:39:27 nvme_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:12:34.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.850 --rc genhtml_branch_coverage=1 00:12:34.850 --rc genhtml_function_coverage=1 00:12:34.850 --rc genhtml_legend=1 00:12:34.850 --rc geninfo_all_blocks=1 00:12:34.850 --rc geninfo_unexecuted_blocks=1 00:12:34.850 00:12:34.850 ' 00:12:34.850 03:39:27 nvme_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:12:34.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.850 --rc genhtml_branch_coverage=1 00:12:34.850 --rc genhtml_function_coverage=1 00:12:34.850 --rc genhtml_legend=1 00:12:34.850 --rc geninfo_all_blocks=1 00:12:34.850 --rc geninfo_unexecuted_blocks=1 00:12:34.850 00:12:34.850 ' 00:12:34.850 03:39:27 nvme_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:12:34.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:34.850 --rc genhtml_branch_coverage=1 00:12:34.850 --rc genhtml_function_coverage=1 00:12:34.850 --rc genhtml_legend=1 00:12:34.850 --rc geninfo_all_blocks=1 00:12:34.850 --rc geninfo_unexecuted_blocks=1 00:12:34.850 00:12:34.850 ' 00:12:34.850 03:39:27 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:34.850 03:39:27 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:12:34.850 03:39:27 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:34.850 03:39:27 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:34.850 03:39:27 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:34.850 03:39:27 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.850 03:39:27 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.850 03:39:27 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.850 03:39:27 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:12:34.850 03:39:27 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:34.850 03:39:27 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:12:34.850 03:39:27 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:12:34.850 03:39:27 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.850 03:39:27 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:34.850 ************************************ 00:12:34.850 START TEST xnvme_to_malloc_dd_copy 00:12:34.850 ************************************ 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1125 -- # malloc_to_xnvme_copy 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:12:34.850 03:39:27 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:34.850 03:39:27 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:34.850 { 00:12:34.850 "subsystems": [ 00:12:34.850 { 00:12:34.850 "subsystem": "bdev", 00:12:34.850 "config": [ 00:12:34.850 { 00:12:34.850 "params": { 00:12:34.850 "block_size": 512, 00:12:34.850 "num_blocks": 2097152, 00:12:34.850 "name": "malloc0" 00:12:34.850 }, 00:12:34.850 "method": "bdev_malloc_create" 00:12:34.850 }, 00:12:34.850 { 00:12:34.850 "params": { 00:12:34.850 "io_mechanism": "libaio", 00:12:34.850 "filename": "/dev/nullb0", 00:12:34.850 "name": "null0" 00:12:34.850 }, 00:12:34.850 "method": "bdev_xnvme_create" 00:12:34.850 }, 00:12:34.850 { 00:12:34.850 "method": "bdev_wait_for_examine" 00:12:34.850 } 00:12:34.850 ] 00:12:34.850 } 00:12:34.850 ] 00:12:34.850 } 00:12:34.850 [2024-10-01 03:39:27.309701] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
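The xtrace above reduces to a three-step recipe: back a 1 GiB target with null_blk, describe both bdevs in a JSON config, and hand that config to spdk_dd. A minimal standalone sketch of the same flow, with /tmp/xnvme_copy.json standing in for the /dev/fd/62 descriptor the test feeds its generated config through:

    # Provide a 1 GiB null block device for the xnvme bdev to sit on
    # (dd/common.sh@186 in the trace).
    modprobe null_blk gb=1

    # Same geometry as the trace: 2097152 blocks x 512 bytes = 1 GiB.
    cat > /tmp/xnvme_copy.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            { "method": "bdev_malloc_create",
              "params": { "name": "malloc0", "block_size": 512, "num_blocks": 2097152 } },
            { "method": "bdev_xnvme_create",
              "params": { "name": "null0", "filename": "/dev/nullb0", "io_mechanism": "libaio" } },
            { "method": "bdev_wait_for_examine" }
          ]
        }
      ]
    }
    EOF

    # Copy the malloc bdev into the xnvme bdev, as xnvme.sh@42 does.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /tmp/xnvme_copy.json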
00:12:34.850 [2024-10-01 03:39:27.309825] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69099 ] 00:12:35.111 [2024-10-01 03:39:27.459985] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:35.372 [2024-10-01 03:39:27.726222] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.371  Copying: 220/1024 [MB] (220 MBps) Copying: 496/1024 [MB] (275 MBps) Copying: 792/1024 [MB] (296 MBps) Copying: 1024/1024 [MB] (average 270 MBps) 00:12:42.371 00:12:42.631 03:39:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:42.631 03:39:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:42.631 03:39:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:42.631 03:39:34 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:42.631 { 00:12:42.631 "subsystems": [ 00:12:42.631 { 00:12:42.631 "subsystem": "bdev", 00:12:42.631 "config": [ 00:12:42.631 { 00:12:42.631 "params": { 00:12:42.631 "block_size": 512, 00:12:42.631 "num_blocks": 2097152, 00:12:42.631 "name": "malloc0" 00:12:42.631 }, 00:12:42.631 "method": "bdev_malloc_create" 00:12:42.631 }, 00:12:42.631 { 00:12:42.631 "params": { 00:12:42.631 "io_mechanism": "libaio", 00:12:42.631 "filename": "/dev/nullb0", 00:12:42.631 "name": "null0" 00:12:42.631 }, 00:12:42.631 "method": "bdev_xnvme_create" 00:12:42.631 }, 00:12:42.631 { 00:12:42.631 "method": "bdev_wait_for_examine" 00:12:42.631 } 00:12:42.631 ] 00:12:42.631 } 00:12:42.631 ] 00:12:42.631 } 00:12:42.631 [2024-10-01 03:39:34.997188] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
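The read-back pass at xnvme.sh@47 reuses the identical bdev config and only swaps the input and output sides, so the second spdk_dd invocation in the trace is simply:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /tmp/xnvme_copy.json

(again with /tmp/xnvme_copy.json standing in for the herestring the script passes via /dev/fd/62).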
00:12:42.631 [2024-10-01 03:39:34.997362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69187 ] 00:12:42.631 [2024-10-01 03:39:35.145220] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:42.892 [2024-10-01 03:39:35.399419] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.169  Copying: 221/1024 [MB] (221 MBps) Copying: 489/1024 [MB] (268 MBps) Copying: 789/1024 [MB] (299 MBps) Copying: 1024/1024 [MB] (average 270 MBps) 00:12:50.169 00:12:50.169 03:39:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:12:50.169 03:39:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:50.169 03:39:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:12:50.169 03:39:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:12:50.169 03:39:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:50.169 03:39:42 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:50.169 { 00:12:50.169 "subsystems": [ 00:12:50.169 { 00:12:50.169 "subsystem": "bdev", 00:12:50.169 "config": [ 00:12:50.169 { 00:12:50.169 "params": { 00:12:50.169 "block_size": 512, 00:12:50.169 "num_blocks": 2097152, 00:12:50.169 "name": "malloc0" 00:12:50.169 }, 00:12:50.169 "method": "bdev_malloc_create" 00:12:50.169 }, 00:12:50.169 { 00:12:50.169 "params": { 00:12:50.169 "io_mechanism": "io_uring", 00:12:50.169 "filename": "/dev/nullb0", 00:12:50.169 "name": "null0" 00:12:50.169 }, 00:12:50.169 "method": "bdev_xnvme_create" 00:12:50.169 }, 00:12:50.169 { 00:12:50.169 "method": "bdev_wait_for_examine" 00:12:50.169 } 00:12:50.169 ] 00:12:50.169 } 00:12:50.169 ] 00:12:50.169 } 00:12:50.169 [2024-10-01 03:39:42.689578] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
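Both passes are then repeated with io_uring; in the driving script this is just a loop over the configured I/O mechanisms that rewrites one field of the xnvme bdev definition before regenerating the config, roughly:

    xnvme_io=(libaio io_uring)                 # xnvme.sh@20-21 in the trace
    declare -A method_bdev_xnvme_create_0
    for io in "${xnvme_io[@]}"; do
        method_bdev_xnvme_create_0["io_mechanism"]=$io   # xnvme.sh@39
        # regenerate the JSON config and rerun both spdk_dd passes
    done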
00:12:50.169 [2024-10-01 03:39:42.689698] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69281 ] 00:12:50.451 [2024-10-01 03:39:42.841546] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.753 [2024-10-01 03:39:43.100128] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.928  Copying: 247/1024 [MB] (247 MBps) Copying: 554/1024 [MB] (306 MBps) Copying: 861/1024 [MB] (306 MBps) Copying: 1024/1024 [MB] (average 290 MBps) 00:12:57.928 00:12:57.928 03:39:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:12:57.928 03:39:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:12:57.928 03:39:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:12:57.928 03:39:50 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:12:57.928 { 00:12:57.928 "subsystems": [ 00:12:57.928 { 00:12:57.928 "subsystem": "bdev", 00:12:57.928 "config": [ 00:12:57.928 { 00:12:57.928 "params": { 00:12:57.928 "block_size": 512, 00:12:57.928 "num_blocks": 2097152, 00:12:57.928 "name": "malloc0" 00:12:57.928 }, 00:12:57.928 "method": "bdev_malloc_create" 00:12:57.928 }, 00:12:57.928 { 00:12:57.928 "params": { 00:12:57.928 "io_mechanism": "io_uring", 00:12:57.928 "filename": "/dev/nullb0", 00:12:57.928 "name": "null0" 00:12:57.928 }, 00:12:57.928 "method": "bdev_xnvme_create" 00:12:57.928 }, 00:12:57.928 { 00:12:57.928 "method": "bdev_wait_for_examine" 00:12:57.928 } 00:12:57.928 ] 00:12:57.928 } 00:12:57.928 ] 00:12:57.928 } 00:12:57.928 [2024-10-01 03:39:50.073929] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
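Reading the progress lines: the deltas between successive cumulative totals (247 MB, then +307 MB, then +307 MB) track the bracketed rates, which suggests one progress tick per second, with the closing line giving the whole-run average. Taking that average as payload over wall-clock time, the io_uring write pass moved 1024 MB at 290 MBps, i.e. about 1024/290 ≈ 3.5 s of copying, versus roughly 1024/270 ≈ 3.8 s for each of the libaio passes above.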
00:12:57.928 [2024-10-01 03:39:50.074072] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69368 ] 00:12:57.928 [2024-10-01 03:39:50.221283] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:57.928 [2024-10-01 03:39:50.396442] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:04.210  Copying: 310/1024 [MB] (310 MBps) Copying: 622/1024 [MB] (311 MBps) Copying: 927/1024 [MB] (305 MBps) Copying: 1024/1024 [MB] (average 309 MBps) 00:13:04.210 00:13:04.210 03:39:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:13:04.210 03:39:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:04.210 00:13:04.210 real 0m29.499s 00:13:04.210 user 0m25.263s 00:13:04.210 sys 0m3.668s 00:13:04.210 03:39:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.210 ************************************ 00:13:04.210 END TEST xnvme_to_malloc_dd_copy 00:13:04.210 ************************************ 00:13:04.210 03:39:56 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:04.498 03:39:56 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:04.498 03:39:56 nvme_xnvme -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:13:04.498 03:39:56 nvme_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:04.498 03:39:56 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:04.498 ************************************ 00:13:04.498 START TEST xnvme_bdevperf 00:13:04.498 ************************************ 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1125 -- # xnvme_bdevperf 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # 
method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:04.498 03:39:56 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:04.498 { 00:13:04.498 "subsystems": [ 00:13:04.498 { 00:13:04.498 "subsystem": "bdev", 00:13:04.498 "config": [ 00:13:04.498 { 00:13:04.498 "params": { 00:13:04.498 "io_mechanism": "libaio", 00:13:04.498 "filename": "/dev/nullb0", 00:13:04.498 "name": "null0" 00:13:04.498 }, 00:13:04.498 "method": "bdev_xnvme_create" 00:13:04.498 }, 00:13:04.498 { 00:13:04.498 "method": "bdev_wait_for_examine" 00:13:04.498 } 00:13:04.498 ] 00:13:04.498 } 00:13:04.498 ] 00:13:04.498 } 00:13:04.498 [2024-10-01 03:39:56.895774] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:04.498 [2024-10-01 03:39:56.895901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69473 ] 00:13:04.759 [2024-10-01 03:39:57.048274] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:04.759 [2024-10-01 03:39:57.219785] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.019 Running I/O for 5 seconds... 00:13:10.165 201920.00 IOPS, 788.75 MiB/s 201888.00 IOPS, 788.62 MiB/s 201749.33 IOPS, 788.08 MiB/s 201952.00 IOPS, 788.88 MiB/s 202137.60 IOPS, 789.60 MiB/s 00:13:10.165 Latency(us) 00:13:10.165 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:10.165 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:10.165 null0 : 5.00 202083.62 789.39 0.00 0.00 314.46 123.67 1562.78 00:13:10.165 =================================================================================================================== 00:13:10.165 Total : 202083.62 789.39 0.00 0.00 314.46 123.67 1562.78 00:13:10.735 03:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:13:10.735 03:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:10.735 03:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:13:10.735 03:40:03 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:13:10.735 03:40:03 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:10.735 03:40:03 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:10.735 { 00:13:10.735 "subsystems": [ 00:13:10.735 { 00:13:10.735 "subsystem": "bdev", 00:13:10.735 "config": [ 00:13:10.735 { 00:13:10.735 "params": { 00:13:10.735 "io_mechanism": "io_uring", 00:13:10.735 "filename": "/dev/nullb0", 00:13:10.735 "name": "null0" 00:13:10.735 }, 00:13:10.735 "method": "bdev_xnvme_create" 00:13:10.735 }, 00:13:10.735 { 00:13:10.735 "method": "bdev_wait_for_examine" 00:13:10.735 } 00:13:10.735 ] 00:13:10.735 } 00:13:10.735 ] 00:13:10.735 } 00:13:10.735 [2024-10-01 03:40:03.211912] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 
initialization... 00:13:10.735 [2024-10-01 03:40:03.212183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69553 ] 00:13:10.996 [2024-10-01 03:40:03.355345] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:10.996 [2024-10-01 03:40:03.530168] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:11.255 Running I/O for 5 seconds... 00:13:16.403 230848.00 IOPS, 901.75 MiB/s 230752.00 IOPS, 901.38 MiB/s 230720.00 IOPS, 901.25 MiB/s 230800.00 IOPS, 901.56 MiB/s 00:13:16.403 Latency(us) 00:13:16.403 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:16.403 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:16.403 null0 : 5.00 230859.16 901.79 0.00 0.00 274.95 152.81 1512.37 00:13:16.403 =================================================================================================================== 00:13:16.403 Total : 230859.16 901.79 0.00 0.00 274.95 152.81 1512.37 00:13:16.975 03:40:09 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:13:16.975 03:40:09 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:13:16.975 ************************************ 00:13:16.975 END TEST xnvme_bdevperf 00:13:16.975 ************************************ 00:13:16.975 00:13:16.975 real 0m12.689s 00:13:16.975 user 0m10.236s 00:13:16.975 sys 0m2.193s 00:13:16.975 03:40:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:16.975 03:40:09 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:17.237 ************************************ 00:13:17.237 END TEST nvme_xnvme 00:13:17.237 ************************************ 00:13:17.237 00:13:17.237 real 0m42.470s 00:13:17.237 user 0m35.612s 00:13:17.237 sys 0m5.986s 00:13:17.237 03:40:09 nvme_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:17.237 03:40:09 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.237 03:40:09 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:17.237 03:40:09 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:17.237 03:40:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:17.237 03:40:09 -- common/autotest_common.sh@10 -- # set +x 00:13:17.237 ************************************ 00:13:17.237 START TEST blockdev_xnvme 00:13:17.237 ************************************ 00:13:17.237 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:13:17.237 * Looking for test storage... 
00:13:17.237 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:13:17.237 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:13:17.237 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:13:17.237 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lcov --version 00:13:17.237 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:13:17.237 03:40:09 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:17.237 03:40:09 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:17.237 03:40:09 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:17.237 03:40:09 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:17.237 03:40:09 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:17.237 03:40:09 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:17.237 03:40:09 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:17.237 03:40:09 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:17.237 03:40:09 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:17.238 03:40:09 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:13:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.238 --rc genhtml_branch_coverage=1 00:13:17.238 --rc genhtml_function_coverage=1 00:13:17.238 --rc genhtml_legend=1 00:13:17.238 --rc geninfo_all_blocks=1 00:13:17.238 --rc geninfo_unexecuted_blocks=1 00:13:17.238 00:13:17.238 ' 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:13:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.238 --rc genhtml_branch_coverage=1 00:13:17.238 --rc genhtml_function_coverage=1 00:13:17.238 --rc genhtml_legend=1 
00:13:17.238 --rc geninfo_all_blocks=1 00:13:17.238 --rc geninfo_unexecuted_blocks=1 00:13:17.238 00:13:17.238 ' 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:13:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.238 --rc genhtml_branch_coverage=1 00:13:17.238 --rc genhtml_function_coverage=1 00:13:17.238 --rc genhtml_legend=1 00:13:17.238 --rc geninfo_all_blocks=1 00:13:17.238 --rc geninfo_unexecuted_blocks=1 00:13:17.238 00:13:17.238 ' 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:13:17.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:17.238 --rc genhtml_branch_coverage=1 00:13:17.238 --rc genhtml_function_coverage=1 00:13:17.238 --rc genhtml_legend=1 00:13:17.238 --rc geninfo_all_blocks=1 00:13:17.238 --rc geninfo_unexecuted_blocks=1 00:13:17.238 00:13:17.238 ' 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=69696 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 69696 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@831 -- # '[' -z 69696 ']' 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:17.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:17.238 03:40:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:17.238 03:40:09 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:13:17.499 [2024-10-01 03:40:09.840655] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:17.499 [2024-10-01 03:40:09.841262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69696 ] 00:13:17.499 [2024-10-01 03:40:09.991802] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:17.760 [2024-10-01 03:40:10.175530] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:18.332 03:40:10 blockdev_xnvme -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:18.332 03:40:10 blockdev_xnvme -- common/autotest_common.sh@864 -- # return 0 00:13:18.332 03:40:10 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:13:18.332 03:40:10 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:13:18.332 03:40:10 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:13:18.332 03:40:10 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:13:18.332 03:40:10 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:13:18.593 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:18.854 Waiting for block devices as requested 00:13:18.854 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.854 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:13:18.854 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:13:19.115 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:13:24.419 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned 
nvme1n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:24.419 03:40:16 blockdev_xnvme 
-- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.419 nvme0n1 00:13:24.419 nvme1n1 00:13:24.419 nvme2n1 00:13:24.419 nvme2n2 00:13:24.419 nvme2n3 00:13:24.419 nvme3n1 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.419 03:40:16 blockdev_xnvme -- 
common/autotest_common.sh@10 -- # set +x 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.419 03:40:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:13:24.419 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:13:24.420 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:24.420 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:13:24.420 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7ae42eb6-06ea-4e8e-a73c-1e12522f90ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7ae42eb6-06ea-4e8e-a73c-1e12522f90ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a7fd4912-f97f-41d1-a5e3-3234eb290bc1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a7fd4912-f97f-41d1-a5e3-3234eb290bc1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "dc212266-ff72-4c2d-96a7-72c572802309"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc212266-ff72-4c2d-96a7-72c572802309",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' 
' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e752fde3-69fa-492c-9b4f-98d78c50eba9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e752fde3-69fa-492c-9b4f-98d78c50eba9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "cc35eb9d-1f71-4de9-bc66-15215ce0dd55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cc35eb9d-1f71-4de9-bc66-15215ce0dd55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a678c095-b7c3-4224-8380-96fa0271a616"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a678c095-b7c3-4224-8380-96fa0271a616",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:24.420 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:13:24.420 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:13:24.420 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:13:24.420 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:13:24.420 03:40:16 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 69696 00:13:24.420 03:40:16 
blockdev_xnvme -- common/autotest_common.sh@950 -- # '[' -z 69696 ']' 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@954 -- # kill -0 69696 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@955 -- # uname 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69696 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:24.420 killing process with pid 69696 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69696' 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@969 -- # kill 69696 00:13:24.420 03:40:16 blockdev_xnvme -- common/autotest_common.sh@974 -- # wait 69696 00:13:25.805 03:40:18 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:25.805 03:40:18 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:25.805 03:40:18 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:25.805 03:40:18 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:25.805 03:40:18 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:25.805 ************************************ 00:13:25.805 START TEST bdev_hello_world 00:13:25.805 ************************************ 00:13:25.805 03:40:18 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:13:25.805 [2024-10-01 03:40:18.134908] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:25.805 [2024-10-01 03:40:18.135039] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70055 ] 00:13:25.805 [2024-10-01 03:40:18.285384] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:26.066 [2024-10-01 03:40:18.466338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:26.327 [2024-10-01 03:40:18.772644] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:13:26.327 [2024-10-01 03:40:18.772688] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:13:26.327 [2024-10-01 03:40:18.772701] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:13:26.327 [2024-10-01 03:40:18.774301] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:13:26.327 [2024-10-01 03:40:18.774766] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:13:26.327 [2024-10-01 03:40:18.774787] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:13:26.327 [2024-10-01 03:40:18.775120] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
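The hello-world smoke test that produced the NOTICE lines above is self-contained and can be rerun against any bdev in the generated config; the invocation in the trace corresponds to:

    # bdev.json is the config blockdev.sh generated for the six xnvme bdevs;
    # -b names the bdev to open, write "Hello World!" to, and read back.
    /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1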
00:13:26.327 00:13:26.327 [2024-10-01 03:40:18.775137] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:13:27.267 00:13:27.267 real 0m1.382s 00:13:27.267 user 0m1.075s 00:13:27.267 sys 0m0.191s 00:13:27.267 03:40:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:27.267 03:40:19 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:13:27.267 ************************************ 00:13:27.267 END TEST bdev_hello_world 00:13:27.267 ************************************ 00:13:27.267 03:40:19 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:13:27.267 03:40:19 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:27.267 03:40:19 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:27.267 03:40:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.267 ************************************ 00:13:27.267 START TEST bdev_bounds 00:13:27.267 ************************************ 00:13:27.267 03:40:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:13:27.267 03:40:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=70092 00:13:27.267 03:40:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:13:27.267 Process bdevio pid: 70092 00:13:27.267 03:40:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:13:27.267 03:40:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 70092' 00:13:27.268 03:40:19 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 70092 00:13:27.268 03:40:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 70092 ']' 00:13:27.268 03:40:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.268 03:40:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:27.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.268 03:40:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.268 03:40:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:27.268 03:40:19 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:13:27.268 [2024-10-01 03:40:19.575255] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:13:27.268 [2024-10-01 03:40:19.575376] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70092 ] 00:13:27.268 [2024-10-01 03:40:19.721649] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:27.529 [2024-10-01 03:40:19.893338] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:27.529 [2024-10-01 03:40:19.893619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:27.529 [2024-10-01 03:40:19.893637] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.101 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:28.101 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:13:28.101 03:40:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:13:28.101 I/O targets: 00:13:28.101 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:13:28.101 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:13:28.102 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:28.102 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:28.102 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:13:28.102 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:13:28.102 00:13:28.102 00:13:28.102 CUnit - A unit testing framework for C - Version 2.1-3 00:13:28.102 http://cunit.sourceforge.net/ 00:13:28.102 00:13:28.102 00:13:28.102 Suite: bdevio tests on: nvme3n1 00:13:28.102 Test: blockdev write read block ...passed 00:13:28.102 Test: blockdev write zeroes read block ...passed 00:13:28.102 Test: blockdev write zeroes read no split ...passed 00:13:28.102 Test: blockdev write zeroes read split ...passed 00:13:28.102 Test: blockdev write zeroes read split partial ...passed 00:13:28.102 Test: blockdev reset ...passed 00:13:28.102 Test: blockdev write read 8 blocks ...passed 00:13:28.102 Test: blockdev write read size > 128k ...passed 00:13:28.102 Test: blockdev write read invalid size ...passed 00:13:28.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:28.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:28.102 Test: blockdev write read max offset ...passed 00:13:28.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:28.102 Test: blockdev writev readv 8 blocks ...passed 00:13:28.102 Test: blockdev writev readv 30 x 1block ...passed 00:13:28.102 Test: blockdev writev readv block ...passed 00:13:28.102 Test: blockdev writev readv size > 128k ...passed 00:13:28.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:28.102 Test: blockdev comparev and writev ...passed 00:13:28.102 Test: blockdev nvme passthru rw ...passed 00:13:28.102 Test: blockdev nvme passthru vendor specific ...passed 00:13:28.102 Test: blockdev nvme admin passthru ...passed 00:13:28.102 Test: blockdev copy ...passed 00:13:28.102 Suite: bdevio tests on: nvme2n3 00:13:28.102 Test: blockdev write read block ...passed 00:13:28.102 Test: blockdev write zeroes read block ...passed 00:13:28.102 Test: blockdev write zeroes read no split ...passed 00:13:28.102 Test: blockdev write zeroes read split ...passed 00:13:28.102 Test: blockdev write zeroes read split partial ...passed 00:13:28.102 Test: blockdev reset ...passed 
00:13:28.102 Test: blockdev write read 8 blocks ...passed 00:13:28.102 Test: blockdev write read size > 128k ...passed 00:13:28.102 Test: blockdev write read invalid size ...passed 00:13:28.102 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:28.102 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:28.102 Test: blockdev write read max offset ...passed 00:13:28.102 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:28.102 Test: blockdev writev readv 8 blocks ...passed 00:13:28.102 Test: blockdev writev readv 30 x 1block ...passed 00:13:28.102 Test: blockdev writev readv block ...passed 00:13:28.102 Test: blockdev writev readv size > 128k ...passed 00:13:28.102 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:28.102 Test: blockdev comparev and writev ...passed 00:13:28.102 Test: blockdev nvme passthru rw ...passed 00:13:28.102 Test: blockdev nvme passthru vendor specific ...passed 00:13:28.102 Test: blockdev nvme admin passthru ...passed 00:13:28.102 Test: blockdev copy ...passed 00:13:28.102 Suite: bdevio tests on: nvme2n2 00:13:28.102 Test: blockdev write read block ...passed 00:13:28.102 Test: blockdev write zeroes read block ...passed 00:13:28.362 Test: blockdev write zeroes read no split ...passed 00:13:28.362 Test: blockdev write zeroes read split ...passed 00:13:28.362 Test: blockdev write zeroes read split partial ...passed 00:13:28.362 Test: blockdev reset ...passed 00:13:28.362 Test: blockdev write read 8 blocks ...passed 00:13:28.362 Test: blockdev write read size > 128k ...passed 00:13:28.362 Test: blockdev write read invalid size ...passed 00:13:28.362 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:28.362 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:28.362 Test: blockdev write read max offset ...passed 00:13:28.362 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:28.362 Test: blockdev writev readv 8 blocks ...passed 00:13:28.362 Test: blockdev writev readv 30 x 1block ...passed 00:13:28.362 Test: blockdev writev readv block ...passed 00:13:28.362 Test: blockdev writev readv size > 128k ...passed 00:13:28.362 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:28.362 Test: blockdev comparev and writev ...passed 00:13:28.362 Test: blockdev nvme passthru rw ...passed 00:13:28.362 Test: blockdev nvme passthru vendor specific ...passed 00:13:28.362 Test: blockdev nvme admin passthru ...passed 00:13:28.362 Test: blockdev copy ...passed 00:13:28.362 Suite: bdevio tests on: nvme2n1 00:13:28.362 Test: blockdev write read block ...passed 00:13:28.362 Test: blockdev write zeroes read block ...passed 00:13:28.362 Test: blockdev write zeroes read no split ...passed 00:13:28.362 Test: blockdev write zeroes read split ...passed 00:13:28.362 Test: blockdev write zeroes read split partial ...passed 00:13:28.362 Test: blockdev reset ...passed 00:13:28.362 Test: blockdev write read 8 blocks ...passed 00:13:28.362 Test: blockdev write read size > 128k ...passed 00:13:28.362 Test: blockdev write read invalid size ...passed 00:13:28.362 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:28.362 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:28.362 Test: blockdev write read max offset ...passed 00:13:28.362 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:28.362 Test: blockdev writev readv 8 blocks 
...passed 00:13:28.362 Test: blockdev writev readv 30 x 1block ...passed 00:13:28.362 Test: blockdev writev readv block ...passed 00:13:28.362 Test: blockdev writev readv size > 128k ...passed 00:13:28.362 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:28.362 Test: blockdev comparev and writev ...passed 00:13:28.362 Test: blockdev nvme passthru rw ...passed 00:13:28.362 Test: blockdev nvme passthru vendor specific ...passed 00:13:28.362 Test: blockdev nvme admin passthru ...passed 00:13:28.362 Test: blockdev copy ...passed 00:13:28.362 Suite: bdevio tests on: nvme1n1 00:13:28.362 Test: blockdev write read block ...passed 00:13:28.362 Test: blockdev write zeroes read block ...passed 00:13:28.362 Test: blockdev write zeroes read no split ...passed 00:13:28.362 Test: blockdev write zeroes read split ...passed 00:13:28.362 Test: blockdev write zeroes read split partial ...passed 00:13:28.362 Test: blockdev reset ...passed 00:13:28.362 Test: blockdev write read 8 blocks ...passed 00:13:28.362 Test: blockdev write read size > 128k ...passed 00:13:28.362 Test: blockdev write read invalid size ...passed 00:13:28.362 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:28.362 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:28.362 Test: blockdev write read max offset ...passed 00:13:28.362 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:28.362 Test: blockdev writev readv 8 blocks ...passed 00:13:28.362 Test: blockdev writev readv 30 x 1block ...passed 00:13:28.362 Test: blockdev writev readv block ...passed 00:13:28.362 Test: blockdev writev readv size > 128k ...passed 00:13:28.362 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:28.362 Test: blockdev comparev and writev ...passed 00:13:28.362 Test: blockdev nvme passthru rw ...passed 00:13:28.362 Test: blockdev nvme passthru vendor specific ...passed 00:13:28.362 Test: blockdev nvme admin passthru ...passed 00:13:28.362 Test: blockdev copy ...passed 00:13:28.362 Suite: bdevio tests on: nvme0n1 00:13:28.362 Test: blockdev write read block ...passed 00:13:28.362 Test: blockdev write zeroes read block ...passed 00:13:28.362 Test: blockdev write zeroes read no split ...passed 00:13:28.362 Test: blockdev write zeroes read split ...passed 00:13:28.623 Test: blockdev write zeroes read split partial ...passed 00:13:28.623 Test: blockdev reset ...passed 00:13:28.623 Test: blockdev write read 8 blocks ...passed 00:13:28.623 Test: blockdev write read size > 128k ...passed 00:13:28.623 Test: blockdev write read invalid size ...passed 00:13:28.623 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:28.623 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:28.623 Test: blockdev write read max offset ...passed 00:13:28.623 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:28.623 Test: blockdev writev readv 8 blocks ...passed 00:13:28.623 Test: blockdev writev readv 30 x 1block ...passed 00:13:28.623 Test: blockdev writev readv block ...passed 00:13:28.623 Test: blockdev writev readv size > 128k ...passed 00:13:28.623 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:28.623 Test: blockdev comparev and writev ...passed 00:13:28.623 Test: blockdev nvme passthru rw ...passed 00:13:28.623 Test: blockdev nvme passthru vendor specific ...passed 00:13:28.623 Test: blockdev nvme admin passthru ...passed 00:13:28.623 Test: blockdev copy ...passed 
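Each of the six suites above runs the same battery of 23 blockdev tests, which is the cross-check for the CUnit summary that follows: 6 suites × 23 tests = 138, matching the 138/138/138 total/ran/passed counts.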
00:13:28.623
00:13:28.623 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:28.623               suites      6      6    n/a      0        0
00:13:28.623                tests    138    138    138      0        0
00:13:28.623              asserts    780    780    780      0      n/a
00:13:28.623
00:13:28.623 Elapsed time = 1.076 seconds
00:13:28.623 0
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 70092
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 70092 ']'
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 70092
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # uname
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70092
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:13:28.623 killing process with pid 70092
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70092'
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@969 -- # kill 70092
00:13:28.623 03:40:20 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@974 -- # wait 70092
00:13:29.194 03:40:21 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT
00:13:29.194
00:13:29.194 real 0m2.133s
00:13:29.194 user 0m5.011s
00:13:29.194 sys 0m0.315s
00:13:29.194 ************************************
00:13:29.194 END TEST bdev_bounds
00:13:29.194 ************************************
00:13:29.194 03:40:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable
00:13:29.194 03:40:21 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:13:29.194 03:40:21 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:29.194 03:40:21 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:13:29.194 03:40:21 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable
00:13:29.194 03:40:21 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:13:29.194 ************************************
00:13:29.194 START TEST bdev_nbd
00:13:29.194 ************************************
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' ''
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]]
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6
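A few entries up, the bdev_bounds stage exits through the harness's killprocess helper under the trap installed at stage entry: confirm the pid is alive with kill -0, refuse to signal anything the harness did not start, then kill and reap it. Reduced to its essential shape (a sketch, not the verbatim helper from common/autotest_common.sh):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2> /dev/null || return 0    # already gone, nothing to do
        # refuse to signal processes we did not start (e.g. a stray sudo)
        [[ "$(ps --no-headers -o comm= "$pid")" != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"    # reap the child so the next stage starts clean
    }
    trap 'killprocess "$bdevio_pid"; exit 1' SIGINT SIGTERM EXIT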
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]]
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9')
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13')
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1')
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=70147
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 70147 /var/tmp/spdk-nbd.sock
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 70147 ']'
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100
00:13:29.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:13:29.194 03:40:21 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json ''
00:13:29.456 [2024-10-01 03:40:21.789815] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
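What follows is the per-device export-and-probe loop: each bdev is exposed through the kernel NBD driver with nbd_start_disk, waitfornbd polls /proc/partitions until the node is usable, and a single direct-I/O read acts as a smoke test. Condensed into a sketch for the first device (the loop bound and sleep interval are assumptions; the RPC and dd invocations are the ones traced below):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk nvme0n1 /dev/nbd0           # expose the bdev as /dev/nbd0
    for ((i = 1; i <= 20; i++)); do                 # waitfornbd: ready once the
        grep -q -w nbd0 /proc/partitions && break   # device shows up in the table
        sleep 0.1
    done
    dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest \
        bs=4096 count=1 iflag=direct                # one-block direct-read probe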
00:13:29.456 [2024-10-01 03:40:21.790424] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.456 [2024-10-01 03:40:21.945101] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:29.716 [2024-10-01 03:40:22.214422] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:30.288 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:30.288 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:13:30.288 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:30.288 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.288 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:30.288 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:13:30.289 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:13:30.289 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:30.289 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:30.289 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:13:30.289 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:13:30.289 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:13:30.289 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:13:30.289 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:30.289 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:13:30.549 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:13:30.549 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:13:30.549 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:13:30.549 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:30.549 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:30.549 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:30.549 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:30.549 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:30.549 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.550 
1+0 records in 00:13:30.550 1+0 records out 00:13:30.550 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00096596 s, 4.2 MB/s 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:30.550 03:40:22 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:30.810 1+0 records in 00:13:30.810 1+0 records out 00:13:30.810 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00134076 s, 3.1 MB/s 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:30.810 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:13:31.071 03:40:23 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd2 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd2 /proc/partitions 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.071 1+0 records in 00:13:31.071 1+0 records out 00:13:31.071 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00140623 s, 2.9 MB/s 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:31.071 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd3 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd3 /proc/partitions 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.331 1+0 records in 00:13:31.331 1+0 records out 00:13:31.331 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00123707 s, 3.3 MB/s 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.331 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:31.332 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.332 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.332 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:31.332 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:31.332 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:31.332 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd4 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd4 /proc/partitions 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.593 1+0 records in 00:13:31.593 1+0 records out 00:13:31.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00129704 s, 3.2 MB/s 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:31.593 03:40:23 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd5 00:13:31.854 03:40:24 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd5 /proc/partitions 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:31.854 1+0 records in 00:13:31.854 1+0 records out 00:13:31.854 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000906161 s, 4.5 MB/s 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:13:31.854 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:32.114 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:13:32.114 { 00:13:32.114 "nbd_device": "/dev/nbd0", 00:13:32.114 "bdev_name": "nvme0n1" 00:13:32.114 }, 00:13:32.114 { 00:13:32.114 "nbd_device": "/dev/nbd1", 00:13:32.114 "bdev_name": "nvme1n1" 00:13:32.114 }, 00:13:32.114 { 00:13:32.114 "nbd_device": "/dev/nbd2", 00:13:32.114 "bdev_name": "nvme2n1" 00:13:32.114 }, 00:13:32.114 { 00:13:32.114 "nbd_device": "/dev/nbd3", 00:13:32.114 "bdev_name": "nvme2n2" 00:13:32.114 }, 00:13:32.114 { 00:13:32.114 "nbd_device": "/dev/nbd4", 00:13:32.114 "bdev_name": "nvme2n3" 00:13:32.115 }, 00:13:32.115 { 00:13:32.115 "nbd_device": "/dev/nbd5", 00:13:32.115 "bdev_name": "nvme3n1" 00:13:32.115 } 00:13:32.115 ]' 00:13:32.115 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:13:32.115 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:13:32.115 { 00:13:32.115 "nbd_device": "/dev/nbd0", 00:13:32.115 "bdev_name": "nvme0n1" 00:13:32.115 }, 00:13:32.115 { 00:13:32.115 "nbd_device": "/dev/nbd1", 00:13:32.115 "bdev_name": "nvme1n1" 00:13:32.115 }, 00:13:32.115 { 00:13:32.115 "nbd_device": "/dev/nbd2", 00:13:32.115 "bdev_name": "nvme2n1" 00:13:32.115 }, 00:13:32.115 { 00:13:32.115 "nbd_device": "/dev/nbd3", 00:13:32.115 "bdev_name": "nvme2n2" 00:13:32.115 }, 00:13:32.115 { 00:13:32.115 "nbd_device": "/dev/nbd4", 00:13:32.115 "bdev_name": "nvme2n3" 00:13:32.115 }, 00:13:32.115 { 00:13:32.115 "nbd_device": "/dev/nbd5", 00:13:32.115 "bdev_name": "nvme3n1" 00:13:32.115 } 00:13:32.115 ]' 00:13:32.115 03:40:24 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:13:32.115 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:13:32.115 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:32.115 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:13:32.115 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:32.115 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:32.115 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.115 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.377 03:40:24 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:13:32.638 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:13:32.638 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:13:32.639 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:13:32.639 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.639 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.639 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:13:32.639 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.639 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.639 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.639 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:32.900 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:33.162 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.441 03:40:25 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:33.720 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:13:33.720 /dev/nbd0 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.981 1+0 records in 00:13:33.981 1+0 records out 00:13:33.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000834262 s, 4.9 MB/s 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:13:33.981 /dev/nbd1 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:33.981 1+0 records in 00:13:33.981 1+0 records out 00:13:33.981 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00135225 s, 3.0 MB/s 00:13:33.981 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:34.243 03:40:26 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:13:34.243 /dev/nbd10 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd10 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd10 /proc/partitions 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.243 1+0 records in 00:13:34.243 1+0 records out 00:13:34.243 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00144131 s, 2.8 MB/s 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:34.243 03:40:26 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:13:34.503 /dev/nbd11 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd11 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.503 03:40:27 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd11 /proc/partitions 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.503 1+0 records in 00:13:34.503 1+0 records out 00:13:34.503 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106735 s, 3.8 MB/s 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.503 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:34.504 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.504 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.504 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:34.504 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.504 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:34.504 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:13:34.764 /dev/nbd12 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd12 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd12 /proc/partitions 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.764 1+0 records in 00:13:34.764 1+0 records out 00:13:34.764 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00115565 s, 3.5 MB/s 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:34.764 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:13:35.025 /dev/nbd13 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd13 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd13 /proc/partitions 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:35.025 1+0 records in 00:13:35.025 1+0 records out 00:13:35.025 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110549 s, 3.7 MB/s 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:35.025 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd0", 00:13:35.286 "bdev_name": "nvme0n1" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd1", 00:13:35.286 "bdev_name": "nvme1n1" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd10", 00:13:35.286 "bdev_name": "nvme2n1" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd11", 00:13:35.286 "bdev_name": "nvme2n2" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd12", 00:13:35.286 "bdev_name": "nvme2n3" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd13", 00:13:35.286 "bdev_name": "nvme3n1" 00:13:35.286 } 00:13:35.286 ]' 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd0", 00:13:35.286 "bdev_name": "nvme0n1" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd1", 00:13:35.286 "bdev_name": "nvme1n1" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd10", 00:13:35.286 "bdev_name": "nvme2n1" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd11", 00:13:35.286 "bdev_name": "nvme2n2" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd12", 00:13:35.286 "bdev_name": "nvme2n3" 00:13:35.286 }, 00:13:35.286 { 00:13:35.286 "nbd_device": "/dev/nbd13", 00:13:35.286 "bdev_name": "nvme3n1" 00:13:35.286 } 00:13:35.286 ]' 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:13:35.286 /dev/nbd1 00:13:35.286 /dev/nbd10 00:13:35.286 /dev/nbd11 00:13:35.286 /dev/nbd12 00:13:35.286 /dev/nbd13' 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:13:35.286 /dev/nbd1 00:13:35.286 /dev/nbd10 00:13:35.286 /dev/nbd11 00:13:35.286 /dev/nbd12 00:13:35.286 /dev/nbd13' 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:35.286 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:35.287 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:13:35.287 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:35.287 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:13:35.287 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:13:35.287 256+0 records in 00:13:35.287 256+0 records out 00:13:35.287 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00456487 s, 230 MB/s 00:13:35.287 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:35.287 03:40:27 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:13:35.547 256+0 records in 00:13:35.547 256+0 records out 00:13:35.547 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.209161 s, 5.0 MB/s 00:13:35.547 03:40:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:35.548 03:40:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:13:36.120 256+0 records in 00:13:36.120 256+0 records out 00:13:36.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.323888 s, 
3.2 MB/s 00:13:36.120 03:40:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.120 03:40:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:13:36.120 256+0 records in 00:13:36.120 256+0 records out 00:13:36.120 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.260102 s, 4.0 MB/s 00:13:36.120 03:40:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.120 03:40:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:13:36.380 256+0 records in 00:13:36.380 256+0 records out 00:13:36.380 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.244881 s, 4.3 MB/s 00:13:36.380 03:40:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.380 03:40:28 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:13:36.643 256+0 records in 00:13:36.643 256+0 records out 00:13:36.643 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.252082 s, 4.2 MB/s 00:13:36.643 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:13:36.643 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:13:36.903 256+0 records in 00:13:36.903 256+0 records out 00:13:36.903 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.255611 s, 4.1 MB/s 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:13:36.903 
03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:36.903 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.164 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.426 03:40:29 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:13:37.686 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:13:37.687 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:13:37.687 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:13:37.687 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.687 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.687 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:13:37.687 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.687 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.687 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.687 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:37.948 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.209 
03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.209 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:13:38.470 03:40:30 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:13:38.731 malloc_lvol_verify 00:13:38.731 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:13:38.991 b948459a-a4c8-4abd-b7d1-67c55bc547f8 00:13:38.991 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:13:39.251 a777c618-90ee-41f1-9b3c-7f3ead440a4a 00:13:39.251 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:13:39.511 /dev/nbd0 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:13:39.511 mke2fs 1.47.0 (5-Feb-2023) 00:13:39.511 Discarding device blocks: 0/4096 
done 00:13:39.511 Creating filesystem with 4096 1k blocks and 1024 inodes 00:13:39.511 00:13:39.511 Allocating group tables: 0/1 done 00:13:39.511 Writing inode tables: 0/1 done 00:13:39.511 Creating journal (1024 blocks): done 00:13:39.511 Writing superblocks and filesystem accounting information: 0/1 done 00:13:39.511 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:39.511 03:40:31 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 70147 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 70147 ']' 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 70147 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:39.511 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70147 00:13:39.771 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:39.771 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:39.771 killing process with pid 70147 00:13:39.771 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70147' 00:13:39.771 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@969 -- # kill 70147 00:13:39.772 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@974 -- # wait 70147 00:13:40.406 03:40:32 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:13:40.406 00:13:40.406 real 0m11.080s 00:13:40.406 user 0m14.581s 00:13:40.406 sys 0m3.944s 00:13:40.406 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:40.406 ************************************ 00:13:40.406 END TEST bdev_nbd 00:13:40.406 ************************************ 00:13:40.406 03:40:32 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 
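The bdev_nbd test that just finished exercises a straightforward write-then-verify loop over the six exported NBD devices. A minimal standalone sketch of that pattern, under the assumption of the same six device names as this run; the temp-file path is made generic, and everything else mirrors the traced dd/cmp parameters:

#!/usr/bin/env bash
# Sketch of the nbd_dd_data_verify pattern traced above: write one random
# 1 MiB buffer (256 x 4 KiB blocks) to every NBD device, then byte-compare
# the first 1 MiB of each device back against the buffer.
set -euo pipefail

tmp_file=$(mktemp)                       # stand-in for test/bdev/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)

dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

for dev in "${nbd_list[@]}"; do          # write phase, bypassing page cache
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

for dev in "${nbd_list[@]}"; do          # verify phase
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm "$tmp_file"
echo "verified ${#nbd_list[@]} devices"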
00:13:40.406 03:40:32 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:13:40.406 03:40:32 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:13:40.406 03:40:32 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:13:40.406 03:40:32 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:13:40.406 03:40:32 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:40.406 03:40:32 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.406 03:40:32 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:40.406 ************************************ 00:13:40.406 START TEST bdev_fio 00:13:40.406 ************************************ 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:13:40.406 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- 
bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:40.406 ************************************ 00:13:40.406 START TEST bdev_fio_rw_verify 00:13:40.406 ************************************ 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # break 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:40.406 03:40:32 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:13:40.666 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:40.666 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:40.666 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:40.666 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:40.666 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:40.666 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:13:40.666 fio-3.35 00:13:40.666 Starting 6 threads 00:13:52.904 00:13:52.904 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=70552: Tue Oct 1 03:40:43 2024 00:13:52.904 read: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(464MiB/10002msec) 00:13:52.904 slat (usec): min=2, max=2119, avg= 6.22, stdev=15.47 00:13:52.904 clat (usec): min=108, max=8284, avg=1699.54, stdev=833.59 00:13:52.904 lat (usec): min=111, max=8287, avg=1705.76, stdev=833.99 
00:13:52.904 clat percentiles (usec): 00:13:52.904 | 50.000th=[ 1582], 99.000th=[ 4228], 99.900th=[ 5735], 99.990th=[ 7177], 00:13:52.904 | 99.999th=[ 7373] 00:13:52.904 write: IOPS=12.2k, BW=47.8MiB/s (50.1MB/s)(478MiB/10002msec); 0 zone resets 00:13:52.904 slat (usec): min=6, max=4201, avg=43.55, stdev=162.97 00:13:52.904 clat (usec): min=65, max=9380, avg=1935.72, stdev=952.30 00:13:52.904 lat (usec): min=97, max=9407, avg=1979.28, stdev=966.32 00:13:52.904 clat percentiles (usec): 00:13:52.904 | 50.000th=[ 1795], 99.000th=[ 4883], 99.900th=[ 6521], 99.990th=[ 7767], 00:13:52.904 | 99.999th=[ 9372] 00:13:52.904 bw ( KiB/s): min=38217, max=67291, per=100.00%, avg=49049.53, stdev=1044.79, samples=114 00:13:52.904 iops : min= 9553, max=16822, avg=12261.05, stdev=261.20, samples=114 00:13:52.904 lat (usec) : 100=0.01%, 250=0.55%, 500=2.34%, 750=5.45%, 1000=8.53% 00:13:52.904 lat (msec) : 2=47.93%, 4=32.69%, 10=2.52% 00:13:52.904 cpu : usr=45.10%, sys=30.36%, ctx=4862, majf=0, minf=12965 00:13:52.904 IO depths : 1=11.3%, 2=23.6%, 4=51.3%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:13:52.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.904 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:52.904 issued rwts: total=118863,122430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:52.904 latency : target=0, window=0, percentile=100.00%, depth=8 00:13:52.904 00:13:52.904 Run status group 0 (all jobs): 00:13:52.904 READ: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=464MiB (487MB), run=10002-10002msec 00:13:52.904 WRITE: bw=47.8MiB/s (50.1MB/s), 47.8MiB/s-47.8MiB/s (50.1MB/s-50.1MB/s), io=478MiB (501MB), run=10002-10002msec 00:13:52.904 ----------------------------------------------------- 00:13:52.904 Suppressions used: 00:13:52.904 count bytes template 00:13:52.904 6 48 /usr/src/fio/parse.c 00:13:52.904 3491 335136 /usr/src/fio/iolog.c 00:13:52.904 1 8 libtcmalloc_minimal.so 00:13:52.904 1 904 libcrypto.so 00:13:52.904 ----------------------------------------------------- 00:13:52.904 00:13:52.904 ************************************ 00:13:52.904 END TEST bdev_fio_rw_verify 00:13:52.904 ************************************ 00:13:52.904 00:13:52.904 real 0m11.891s 00:13:52.904 user 0m28.501s 00:13:52.904 sys 0m18.535s 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- 
common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "7ae42eb6-06ea-4e8e-a73c-1e12522f90ad"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "7ae42eb6-06ea-4e8e-a73c-1e12522f90ad",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "a7fd4912-f97f-41d1-a5e3-3234eb290bc1"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "a7fd4912-f97f-41d1-a5e3-3234eb290bc1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "dc212266-ff72-4c2d-96a7-72c572802309"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "dc212266-ff72-4c2d-96a7-72c572802309",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' 
"seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "e752fde3-69fa-492c-9b4f-98d78c50eba9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "e752fde3-69fa-492c-9b4f-98d78c50eba9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "cc35eb9d-1f71-4de9-bc66-15215ce0dd55"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "cc35eb9d-1f71-4de9-bc66-15215ce0dd55",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "a678c095-b7c3-4224-8380-96fa0271a616"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "a678c095-b7c3-4224-8380-96fa0271a616",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:13:52.904 /home/vagrant/spdk_repo/spdk 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 00:13:52.904 00:13:52.904 real 0m12.066s 00:13:52.904 user 0m28.576s 
00:13:52.904 sys 0m18.612s 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:52.904 03:40:44 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:13:52.904 ************************************ 00:13:52.904 END TEST bdev_fio 00:13:52.904 ************************************ 00:13:52.904 03:40:44 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:13:52.904 03:40:44 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:52.904 03:40:44 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:52.904 03:40:44 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:52.904 03:40:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:52.904 ************************************ 00:13:52.904 START TEST bdev_verify 00:13:52.905 ************************************ 00:13:52.905 03:40:44 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:13:52.905 [2024-10-01 03:40:45.066276] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:52.905 [2024-10-01 03:40:45.066419] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70726 ] 00:13:52.905 [2024-10-01 03:40:45.220727] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:53.166 [2024-10-01 03:40:45.498596] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.166 [2024-10-01 03:40:45.498724] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.427 Running I/O for 5 seconds... 
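The bdev_verify stage launched just above drives every xnvme bdev through the bdevperf example app. A hedged sketch of that invocation; the flags are copied from the traced command line, while SPDK_DIR is an assumed shorthand for the repo path used in this run:

#!/usr/bin/env bash
set -euo pipefail
SPDK_DIR=/home/vagrant/spdk_repo/spdk    # repo path as used in this run

args=(
    --json "$SPDK_DIR/test/bdev/bdev.json"  # bdev layout generated earlier
    -q 128        # queue depth per job
    -o 4096       # I/O size in bytes
    -w verify     # write, read back, and compare
    -t 5          # runtime in seconds
    -C            # a job per bdev on each core in the mask, which is why the
                  # results below show paired Core Mask 0x1/0x2 jobs per bdev
    -m 0x3        # run reactors on cores 0 and 1
)
"$SPDK_DIR/build/examples/bdevperf" "${args[@]}"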
00:13:58.592 25088.00 IOPS, 98.00 MiB/s 24528.00 IOPS, 95.81 MiB/s 24173.00 IOPS, 94.43 MiB/s 23921.75 IOPS, 93.44 MiB/s 00:13:58.592 Latency(us) 00:13:58.592 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:58.592 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:58.592 Verification LBA range: start 0x0 length 0xa0000 00:13:58.592 nvme0n1 : 5.04 1904.72 7.44 0.00 0.00 67084.18 10384.94 65737.65 00:13:58.592 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:58.592 Verification LBA range: start 0xa0000 length 0xa0000 00:13:58.592 nvme0n1 : 5.05 1749.04 6.83 0.00 0.00 73019.61 9931.22 68964.04 00:13:58.592 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:58.592 Verification LBA range: start 0x0 length 0xbd0bd 00:13:58.592 nvme1n1 : 5.03 2451.18 9.57 0.00 0.00 52028.17 6427.57 64124.46 00:13:58.592 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:58.592 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:13:58.592 nvme1n1 : 5.07 2362.02 9.23 0.00 0.00 53792.28 6604.01 64527.75 00:13:58.592 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:58.592 Verification LBA range: start 0x0 length 0x80000 00:13:58.592 nvme2n1 : 5.05 1950.65 7.62 0.00 0.00 65287.94 8620.50 72190.42 00:13:58.592 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:58.592 Verification LBA range: start 0x80000 length 0x80000 00:13:58.592 nvme2n1 : 5.05 1798.69 7.03 0.00 0.00 70726.61 10082.46 62914.56 00:13:58.592 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:58.592 Verification LBA range: start 0x0 length 0x80000 00:13:58.593 nvme2n2 : 5.07 1920.37 7.50 0.00 0.00 66154.30 5293.29 72190.42 00:13:58.593 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:58.593 Verification LBA range: start 0x80000 length 0x80000 00:13:58.593 nvme2n2 : 5.07 1767.02 6.90 0.00 0.00 71855.26 12250.19 62914.56 00:13:58.593 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:58.593 Verification LBA range: start 0x0 length 0x80000 00:13:58.593 nvme2n3 : 5.06 1921.43 7.51 0.00 0.00 66008.11 4310.25 70980.53 00:13:58.593 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:58.593 Verification LBA range: start 0x80000 length 0x80000 00:13:58.593 nvme2n3 : 5.07 1766.50 6.90 0.00 0.00 71699.70 9427.10 67754.14 00:13:58.593 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:13:58.593 Verification LBA range: start 0x0 length 0x20000 00:13:58.593 nvme3n1 : 5.07 1919.76 7.50 0.00 0.00 65963.67 5923.45 66544.25 00:13:58.593 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:13:58.593 Verification LBA range: start 0x20000 length 0x20000 00:13:58.593 nvme3n1 : 5.08 1764.92 6.89 0.00 0.00 71659.00 2848.30 69770.63 00:13:58.593 =================================================================================================================== 00:13:58.593 Total : 23276.29 90.92 0.00 0.00 65537.67 2848.30 72190.42 00:13:59.979 00:13:59.979 real 0m7.108s 00:13:59.979 user 0m11.270s 00:13:59.979 sys 0m1.484s 00:13:59.979 03:40:52 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:59.979 ************************************ 00:13:59.979 END TEST bdev_verify 00:13:59.979 ************************************ 00:13:59.979 03:40:52 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@10 -- # set +x 00:13:59.979 03:40:52 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:59.979 03:40:52 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:13:59.979 03:40:52 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:59.979 03:40:52 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:59.979 ************************************ 00:13:59.979 START TEST bdev_verify_big_io 00:13:59.979 ************************************ 00:13:59.979 03:40:52 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:13:59.979 [2024-10-01 03:40:52.256080] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:13:59.979 [2024-10-01 03:40:52.256233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70825 ] 00:13:59.979 [2024-10-01 03:40:52.415100] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:00.241 [2024-10-01 03:40:52.687148] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.241 [2024-10-01 03:40:52.687307] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.813 Running I/O for 5 seconds... 00:14:06.981 1472.00 IOPS, 92.00 MiB/s 2636.00 IOPS, 164.75 MiB/s 3193.67 IOPS, 199.60 MiB/s 00:14:06.981 Latency(us) 00:14:06.981 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.981 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x0 length 0xa000 00:14:06.981 nvme0n1 : 5.94 156.20 9.76 0.00 0.00 783781.77 29642.44 1116330.14 00:14:06.981 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0xa000 length 0xa000 00:14:06.981 nvme0n1 : 5.68 135.16 8.45 0.00 0.00 915966.42 174224.94 1096971.82 00:14:06.981 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x0 length 0xbd0b 00:14:06.981 nvme1n1 : 5.78 107.42 6.71 0.00 0.00 1117384.60 5318.50 1768060.46 00:14:06.981 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0xbd0b length 0xbd0b 00:14:06.981 nvme1n1 : 5.94 140.07 8.75 0.00 0.00 856371.41 7763.50 929199.66 00:14:06.981 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x0 length 0x8000 00:14:06.981 nvme2n1 : 5.79 96.77 6.05 0.00 0.00 1197298.07 97598.23 2258471.38 00:14:06.981 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x8000 length 0x8000 00:14:06.981 nvme2n1 : 5.84 131.47 8.22 0.00 0.00 884553.26 99211.42 1161499.57 00:14:06.981 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x0 length 0x8000 00:14:06.981 nvme2n2 : 5.95 139.74 8.73 0.00 0.00 799437.19 7511.43 1064707.94 
00:14:06.981 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x8000 length 0x8000 00:14:06.981 nvme2n2 : 5.84 153.30 9.58 0.00 0.00 732462.47 10435.35 871124.68 00:14:06.981 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x0 length 0x8000 00:14:06.981 nvme2n3 : 6.02 114.24 7.14 0.00 0.00 953442.70 64527.75 2361715.79 00:14:06.981 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x8000 length 0x8000 00:14:06.981 nvme2n3 : 6.00 95.93 6.00 0.00 0.00 1133489.93 68560.74 1858399.31 00:14:06.981 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x0 length 0x2000 00:14:06.981 nvme3n1 : 6.03 159.29 9.96 0.00 0.00 662450.54 7813.91 1155046.79 00:14:06.981 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:14:06.981 Verification LBA range: start 0x2000 length 0x2000 00:14:06.981 nvme3n1 : 6.02 202.07 12.63 0.00 0.00 526005.51 4763.96 635598.38 00:14:06.981 =================================================================================================================== 00:14:06.981 Total : 1631.67 101.98 0.00 0.00 839075.63 4763.96 2361715.79 00:14:07.913 00:14:07.913 real 0m8.180s 00:14:07.913 user 0m14.663s 00:14:07.913 sys 0m0.563s 00:14:07.913 03:41:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:07.913 03:41:00 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.913 ************************************ 00:14:07.913 END TEST bdev_verify_big_io 00:14:07.913 ************************************ 00:14:07.913 03:41:00 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.913 03:41:00 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:07.913 03:41:00 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:07.913 03:41:00 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:07.913 ************************************ 00:14:07.913 START TEST bdev_write_zeroes 00:14:07.913 ************************************ 00:14:07.913 03:41:00 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:07.913 [2024-10-01 03:41:00.461502] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:07.913 [2024-10-01 03:41:00.461626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70941 ] 00:14:08.171 [2024-10-01 03:41:00.614696] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:08.429 [2024-10-01 03:41:00.813850] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:08.688 Running I/O for 1 seconds... 
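The three bdevperf stages in this suite (bdev_verify, bdev_verify_big_io, and the bdev_write_zeroes run just started) differ only in their flag sets. A sketch of a wrapper that reproduces each stage; the wrapper name is illustrative, the flag sets are copied from the traced run_test command lines, and SPDK_DIR is assumed as in the sketch above:

run_bdevperf_stage() {
    # $1: workload, $2: I/O size in bytes, $3: runtime in seconds;
    # any remaining args (core mask etc.) are passed through.
    local workload=$1 io_size=$2 runtime=$3
    shift 3
    "$SPDK_DIR/build/examples/bdevperf" \
        --json "$SPDK_DIR/test/bdev/bdev.json" \
        -q 128 -o "$io_size" -w "$workload" -t "$runtime" "$@"
}

run_bdevperf_stage verify       4096  5 -C -m 0x3   # bdev_verify: 4 KiB I/O
run_bdevperf_stage verify       65536 5 -C -m 0x3   # bdev_verify_big_io: 64 KiB I/O
run_bdevperf_stage write_zeroes 4096  1             # bdev_write_zeroes: 1 s, one core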
00:14:10.064 66944.00 IOPS, 261.50 MiB/s 00:14:10.064 Latency(us) 00:14:10.064 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:10.064 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:10.064 nvme0n1 : 1.02 9504.30 37.13 0.00 0.00 13456.43 5671.38 26214.40 00:14:10.064 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:10.064 nvme1n1 : 1.03 18586.06 72.60 0.00 0.00 6856.92 3680.10 21374.82 00:14:10.064 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:10.064 nvme2n1 : 1.03 9490.49 37.07 0.00 0.00 13388.71 7864.32 21072.34 00:14:10.064 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:10.064 nvme2n2 : 1.03 9479.28 37.03 0.00 0.00 13395.21 7965.14 21979.77 00:14:10.064 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:10.064 nvme2n3 : 1.03 9468.64 36.99 0.00 0.00 13399.30 8015.56 23492.14 00:14:10.064 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:14:10.064 nvme3n1 : 1.03 9458.01 36.95 0.00 0.00 13405.52 8116.38 25105.33 00:14:10.064 =================================================================================================================== 00:14:10.064 Total : 65986.78 257.76 0.00 0.00 11559.10 3680.10 26214.40 00:14:11.008 00:14:11.008 real 0m2.814s 00:14:11.008 user 0m2.064s 00:14:11.008 sys 0m0.599s 00:14:11.008 03:41:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.008 03:41:03 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:14:11.008 ************************************ 00:14:11.008 END TEST bdev_write_zeroes 00:14:11.008 ************************************ 00:14:11.008 03:41:03 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:11.008 03:41:03 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:11.008 03:41:03 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.008 03:41:03 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.008 ************************************ 00:14:11.008 START TEST bdev_json_nonenclosed 00:14:11.008 ************************************ 00:14:11.008 03:41:03 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:11.008 [2024-10-01 03:41:03.362796] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:11.008 [2024-10-01 03:41:03.362943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70991 ] 00:14:11.008 [2024-10-01 03:41:03.515498] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:11.269 [2024-10-01 03:41:03.775314] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:11.269 [2024-10-01 03:41:03.775442] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 
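The bdev_json_nonenclosed stage running here is a negative test: bdevperf is handed a config whose top level is not a JSON object and must refuse to start. A sketch of that check; the payload written below is illustrative, since the actual nonenclosed.json contents are not shown in this log:

bad_cfg=$(mktemp)
echo '"not-an-object"' > "$bad_cfg"      # illustrative non-object top level

if "$SPDK_DIR/build/examples/bdevperf" --json "$bad_cfg" \
       -q 128 -o 4096 -w write_zeroes -t 1; then
    echo "FAIL: non-enclosed config was accepted" >&2
    exit 1
fi
echo "PASS: rejected with 'not enclosed in {}' as expected"
rm "$bad_cfg"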
00:14:11.269 [2024-10-01 03:41:03.775465] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:11.269 [2024-10-01 03:41:03.775478] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:11.840 00:14:11.840 real 0m0.835s 00:14:11.840 user 0m0.590s 00:14:11.840 sys 0m0.137s 00:14:11.840 03:41:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:11.840 ************************************ 00:14:11.840 END TEST bdev_json_nonenclosed 00:14:11.840 ************************************ 00:14:11.840 03:41:04 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:14:11.840 03:41:04 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:11.840 03:41:04 blockdev_xnvme -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:14:11.840 03:41:04 blockdev_xnvme -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:11.840 03:41:04 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:11.840 ************************************ 00:14:11.840 START TEST bdev_json_nonarray 00:14:11.840 ************************************ 00:14:11.840 03:41:04 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:14:11.840 [2024-10-01 03:41:04.266270] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:11.840 [2024-10-01 03:41:04.266411] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71022 ] 00:14:12.101 [2024-10-01 03:41:04.419092] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:12.363 [2024-10-01 03:41:04.661755] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.363 [2024-10-01 03:41:04.661898] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
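For contrast with the two configs bdevperf just rejected ("not enclosed in {}" and "'subsystems' should be an array"), a sketch of the minimal shape a valid config takes: a top-level object whose "subsystems" member is an array of subsystem objects. The malloc entry is illustrative and not taken from this run, though bdev_malloc_create itself appears in the nbd trace earlier:

cat > good.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_malloc_create",
          "params": { "name": "Malloc0", "num_blocks": 8192, "block_size": 512 }
        }
      ]
    }
  ]
}
EOF
# A config of this shape passes both checks exercised by these two stages.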
00:14:12.363 [2024-10-01 03:41:04.661920] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:14:12.363 [2024-10-01 03:41:04.661932] app.c:1061:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:14:12.624 00:14:12.624 real 0m0.816s 00:14:12.624 user 0m0.564s 00:14:12.624 sys 0m0.143s 00:14:12.624 ************************************ 00:14:12.624 END TEST bdev_json_nonarray 00:14:12.624 ************************************ 00:14:12.624 03:41:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:12.624 03:41:05 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:14:12.624 03:41:05 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:14:13.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:17.406 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:14:17.406 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:14:17.406 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:14:17.406 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:14:17.406 00:14:17.406 real 0m59.774s 00:14:17.406 user 1m27.610s 00:14:17.406 sys 0m35.070s 00:14:17.406 ************************************ 00:14:17.406 END TEST blockdev_xnvme 00:14:17.406 03:41:09 blockdev_xnvme -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:17.406 03:41:09 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:17.406 ************************************ 00:14:17.406 03:41:09 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:17.406 03:41:09 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:17.406 03:41:09 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.406 03:41:09 -- common/autotest_common.sh@10 -- # set +x 00:14:17.406 ************************************ 00:14:17.406 START TEST ublk 00:14:17.406 ************************************ 00:14:17.406 03:41:09 ublk -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:14:17.406 * Looking for test storage... 
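On the setup.sh output above: the harness hands the four QEMU NVMe controllers (PCI id 1b36:0010) from the kernel nvme driver to uio_pci_generic so userspace drivers can claim them, while the virtio disk at 0000:00:03.0 stays kernel-bound because it backs active mounts. The same binding state can be inspected by hand, a sketch outside the harness:

  lspci -nn | grep -i 'Non-Volatile'                              # NVMe controllers and their [1b36:0010] ids
  basename "$(readlink /sys/bus/pci/devices/0000:00:10.0/driver)" # driver currently bound to one controller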
00:14:17.406 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:17.406 03:41:09 ublk -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:17.406 03:41:09 ublk -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:17.406 03:41:09 ublk -- common/autotest_common.sh@1681 -- # lcov --version 00:14:17.406 03:41:09 ublk -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:17.406 03:41:09 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.406 03:41:09 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.406 03:41:09 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.406 03:41:09 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.406 03:41:09 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.406 03:41:09 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.406 03:41:09 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.406 03:41:09 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.406 03:41:09 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.406 03:41:09 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.406 03:41:09 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.406 03:41:09 ublk -- scripts/common.sh@344 -- # case "$op" in 00:14:17.406 03:41:09 ublk -- scripts/common.sh@345 -- # : 1 00:14:17.406 03:41:09 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.406 03:41:09 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:17.406 03:41:09 ublk -- scripts/common.sh@365 -- # decimal 1 00:14:17.406 03:41:09 ublk -- scripts/common.sh@353 -- # local d=1 00:14:17.406 03:41:09 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.406 03:41:09 ublk -- scripts/common.sh@355 -- # echo 1 00:14:17.406 03:41:09 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.406 03:41:09 ublk -- scripts/common.sh@366 -- # decimal 2 00:14:17.406 03:41:09 ublk -- scripts/common.sh@353 -- # local d=2 00:14:17.406 03:41:09 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.406 03:41:09 ublk -- scripts/common.sh@355 -- # echo 2 00:14:17.407 03:41:09 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.407 03:41:09 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.407 03:41:09 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.407 03:41:09 ublk -- scripts/common.sh@368 -- # return 0 00:14:17.407 03:41:09 ublk -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.407 03:41:09 ublk -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.407 --rc genhtml_branch_coverage=1 00:14:17.407 --rc genhtml_function_coverage=1 00:14:17.407 --rc genhtml_legend=1 00:14:17.407 --rc geninfo_all_blocks=1 00:14:17.407 --rc geninfo_unexecuted_blocks=1 00:14:17.407 00:14:17.407 ' 00:14:17.407 03:41:09 ublk -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.407 --rc genhtml_branch_coverage=1 00:14:17.407 --rc genhtml_function_coverage=1 00:14:17.407 --rc genhtml_legend=1 00:14:17.407 --rc geninfo_all_blocks=1 00:14:17.407 --rc geninfo_unexecuted_blocks=1 00:14:17.407 00:14:17.407 ' 00:14:17.407 03:41:09 ublk -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.407 --rc genhtml_branch_coverage=1 00:14:17.407 --rc 
genhtml_function_coverage=1 00:14:17.407 --rc genhtml_legend=1 00:14:17.407 --rc geninfo_all_blocks=1 00:14:17.407 --rc geninfo_unexecuted_blocks=1 00:14:17.407 00:14:17.407 ' 00:14:17.407 03:41:09 ublk -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:17.407 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.407 --rc genhtml_branch_coverage=1 00:14:17.407 --rc genhtml_function_coverage=1 00:14:17.407 --rc genhtml_legend=1 00:14:17.407 --rc geninfo_all_blocks=1 00:14:17.407 --rc geninfo_unexecuted_blocks=1 00:14:17.407 00:14:17.407 ' 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:17.407 03:41:09 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:17.407 03:41:09 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:17.407 03:41:09 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:17.407 03:41:09 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:17.407 03:41:09 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:17.407 03:41:09 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:17.407 03:41:09 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:17.407 03:41:09 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:14:17.407 03:41:09 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:14:17.407 03:41:09 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:17.407 03:41:09 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:17.407 03:41:09 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:17.407 ************************************ 00:14:17.407 START TEST test_save_ublk_config 00:14:17.407 ************************************ 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- common/autotest_common.sh@1125 -- # test_save_config 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=71317 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 71317 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71317 ']' 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:17.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
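test_save_config, starting here, round-trips ublk state through save_config: bring up spdk_tgt with -L ublk, create the ublk target and a malloc-backed ublk disk, capture the running JSON config, kill the target, then relaunch from the capture and check that /dev/ublkb0 reappears. A sketch of the equivalent manual RPC sequence (rpc_cmd in the trace wraps scripts/rpc.py):

  scripts/rpc.py ublk_create_target
  scripts/rpc.py bdev_malloc_create -b malloc0 32 4096   # 8192 blocks x 4096 B, per the dump below
  scripts/rpc.py ublk_start_disk malloc0 0 -q 1 -d 128
  scripts/rpc.py save_config > saved.json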
00:14:17.407 03:41:09 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:17.407 03:41:09 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:17.407 [2024-10-01 03:41:09.734439] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:17.407 [2024-10-01 03:41:09.734595] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71317 ] 00:14:17.407 [2024-10-01 03:41:09.891451] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:17.668 [2024-10-01 03:41:10.202643] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:18.612 03:41:10 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:18.612 03:41:10 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:18.612 03:41:10 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:14:18.612 03:41:10 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:14:18.612 03:41:10 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.612 03:41:10 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:18.612 [2024-10-01 03:41:11.005031] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:18.612 [2024-10-01 03:41:11.005909] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:18.612 malloc0 00:14:18.612 [2024-10-01 03:41:11.077171] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:18.612 [2024-10-01 03:41:11.077271] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:18.612 [2024-10-01 03:41:11.077282] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:18.612 [2024-10-01 03:41:11.077293] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:18.612 [2024-10-01 03:41:11.085064] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:18.612 [2024-10-01 03:41:11.085097] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:18.612 [2024-10-01 03:41:11.093060] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:18.612 [2024-10-01 03:41:11.093186] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:18.612 [2024-10-01 03:41:11.110045] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:18.612 0 00:14:18.612 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.612 03:41:11 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:14:18.612 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:18.612 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:14:18.873 
"subsystems": [ 00:14:18.873 { 00:14:18.873 "subsystem": "fsdev", 00:14:18.873 "config": [ 00:14:18.873 { 00:14:18.873 "method": "fsdev_set_opts", 00:14:18.873 "params": { 00:14:18.873 "fsdev_io_pool_size": 65535, 00:14:18.873 "fsdev_io_cache_size": 256 00:14:18.873 } 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "keyring", 00:14:18.873 "config": [] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "iobuf", 00:14:18.873 "config": [ 00:14:18.873 { 00:14:18.873 "method": "iobuf_set_options", 00:14:18.873 "params": { 00:14:18.873 "small_pool_count": 8192, 00:14:18.873 "large_pool_count": 1024, 00:14:18.873 "small_bufsize": 8192, 00:14:18.873 "large_bufsize": 135168 00:14:18.873 } 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "sock", 00:14:18.873 "config": [ 00:14:18.873 { 00:14:18.873 "method": "sock_set_default_impl", 00:14:18.873 "params": { 00:14:18.873 "impl_name": "posix" 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "sock_impl_set_options", 00:14:18.873 "params": { 00:14:18.873 "impl_name": "ssl", 00:14:18.873 "recv_buf_size": 4096, 00:14:18.873 "send_buf_size": 4096, 00:14:18.873 "enable_recv_pipe": true, 00:14:18.873 "enable_quickack": false, 00:14:18.873 "enable_placement_id": 0, 00:14:18.873 "enable_zerocopy_send_server": true, 00:14:18.873 "enable_zerocopy_send_client": false, 00:14:18.873 "zerocopy_threshold": 0, 00:14:18.873 "tls_version": 0, 00:14:18.873 "enable_ktls": false 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "sock_impl_set_options", 00:14:18.873 "params": { 00:14:18.873 "impl_name": "posix", 00:14:18.873 "recv_buf_size": 2097152, 00:14:18.873 "send_buf_size": 2097152, 00:14:18.873 "enable_recv_pipe": true, 00:14:18.873 "enable_quickack": false, 00:14:18.873 "enable_placement_id": 0, 00:14:18.873 "enable_zerocopy_send_server": true, 00:14:18.873 "enable_zerocopy_send_client": false, 00:14:18.873 "zerocopy_threshold": 0, 00:14:18.873 "tls_version": 0, 00:14:18.873 "enable_ktls": false 00:14:18.873 } 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "vmd", 00:14:18.873 "config": [] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "accel", 00:14:18.873 "config": [ 00:14:18.873 { 00:14:18.873 "method": "accel_set_options", 00:14:18.873 "params": { 00:14:18.873 "small_cache_size": 128, 00:14:18.873 "large_cache_size": 16, 00:14:18.873 "task_count": 2048, 00:14:18.873 "sequence_count": 2048, 00:14:18.873 "buf_count": 2048 00:14:18.873 } 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "bdev", 00:14:18.873 "config": [ 00:14:18.873 { 00:14:18.873 "method": "bdev_set_options", 00:14:18.873 "params": { 00:14:18.873 "bdev_io_pool_size": 65535, 00:14:18.873 "bdev_io_cache_size": 256, 00:14:18.873 "bdev_auto_examine": true, 00:14:18.873 "iobuf_small_cache_size": 128, 00:14:18.873 "iobuf_large_cache_size": 16 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "bdev_raid_set_options", 00:14:18.873 "params": { 00:14:18.873 "process_window_size_kb": 1024, 00:14:18.873 "process_max_bandwidth_mb_sec": 0 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "bdev_iscsi_set_options", 00:14:18.873 "params": { 00:14:18.873 "timeout_sec": 30 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "bdev_nvme_set_options", 00:14:18.873 "params": { 00:14:18.873 "action_on_timeout": "none", 00:14:18.873 "timeout_us": 0, 00:14:18.873 
"timeout_admin_us": 0, 00:14:18.873 "keep_alive_timeout_ms": 10000, 00:14:18.873 "arbitration_burst": 0, 00:14:18.873 "low_priority_weight": 0, 00:14:18.873 "medium_priority_weight": 0, 00:14:18.873 "high_priority_weight": 0, 00:14:18.873 "nvme_adminq_poll_period_us": 10000, 00:14:18.873 "nvme_ioq_poll_period_us": 0, 00:14:18.873 "io_queue_requests": 0, 00:14:18.873 "delay_cmd_submit": true, 00:14:18.873 "transport_retry_count": 4, 00:14:18.873 "bdev_retry_count": 3, 00:14:18.873 "transport_ack_timeout": 0, 00:14:18.873 "ctrlr_loss_timeout_sec": 0, 00:14:18.873 "reconnect_delay_sec": 0, 00:14:18.873 "fast_io_fail_timeout_sec": 0, 00:14:18.873 "disable_auto_failback": false, 00:14:18.873 "generate_uuids": false, 00:14:18.873 "transport_tos": 0, 00:14:18.873 "nvme_error_stat": false, 00:14:18.873 "rdma_srq_size": 0, 00:14:18.873 "io_path_stat": false, 00:14:18.873 "allow_accel_sequence": false, 00:14:18.873 "rdma_max_cq_size": 0, 00:14:18.873 "rdma_cm_event_timeout_ms": 0, 00:14:18.873 "dhchap_digests": [ 00:14:18.873 "sha256", 00:14:18.873 "sha384", 00:14:18.873 "sha512" 00:14:18.873 ], 00:14:18.873 "dhchap_dhgroups": [ 00:14:18.873 "null", 00:14:18.873 "ffdhe2048", 00:14:18.873 "ffdhe3072", 00:14:18.873 "ffdhe4096", 00:14:18.873 "ffdhe6144", 00:14:18.873 "ffdhe8192" 00:14:18.873 ] 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "bdev_nvme_set_hotplug", 00:14:18.873 "params": { 00:14:18.873 "period_us": 100000, 00:14:18.873 "enable": false 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "bdev_malloc_create", 00:14:18.873 "params": { 00:14:18.873 "name": "malloc0", 00:14:18.873 "num_blocks": 8192, 00:14:18.873 "block_size": 4096, 00:14:18.873 "physical_block_size": 4096, 00:14:18.873 "uuid": "42179850-11de-41aa-923f-11b6d6bc53f9", 00:14:18.873 "optimal_io_boundary": 0, 00:14:18.873 "md_size": 0, 00:14:18.873 "dif_type": 0, 00:14:18.873 "dif_is_head_of_md": false, 00:14:18.873 "dif_pi_format": 0 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "bdev_wait_for_examine" 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "scsi", 00:14:18.873 "config": null 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "scheduler", 00:14:18.873 "config": [ 00:14:18.873 { 00:14:18.873 "method": "framework_set_scheduler", 00:14:18.873 "params": { 00:14:18.873 "name": "static" 00:14:18.873 } 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "vhost_scsi", 00:14:18.873 "config": [] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "vhost_blk", 00:14:18.873 "config": [] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "ublk", 00:14:18.873 "config": [ 00:14:18.873 { 00:14:18.873 "method": "ublk_create_target", 00:14:18.873 "params": { 00:14:18.873 "cpumask": "1" 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "ublk_start_disk", 00:14:18.873 "params": { 00:14:18.873 "bdev_name": "malloc0", 00:14:18.873 "ublk_id": 0, 00:14:18.873 "num_queues": 1, 00:14:18.873 "queue_depth": 128 00:14:18.873 } 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "nbd", 00:14:18.873 "config": [] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "nvmf", 00:14:18.873 "config": [ 00:14:18.873 { 00:14:18.873 "method": "nvmf_set_config", 00:14:18.873 "params": { 00:14:18.873 "discovery_filter": "match_any", 00:14:18.873 "admin_cmd_passthru": { 00:14:18.873 "identify_ctrlr": false 00:14:18.873 }, 00:14:18.873 "dhchap_digests": [ 
00:14:18.873 "sha256", 00:14:18.873 "sha384", 00:14:18.873 "sha512" 00:14:18.873 ], 00:14:18.873 "dhchap_dhgroups": [ 00:14:18.873 "null", 00:14:18.873 "ffdhe2048", 00:14:18.873 "ffdhe3072", 00:14:18.873 "ffdhe4096", 00:14:18.873 "ffdhe6144", 00:14:18.873 "ffdhe8192" 00:14:18.873 ] 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "nvmf_set_max_subsystems", 00:14:18.873 "params": { 00:14:18.873 "max_subsystems": 1024 00:14:18.873 } 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "method": "nvmf_set_crdt", 00:14:18.873 "params": { 00:14:18.873 "crdt1": 0, 00:14:18.873 "crdt2": 0, 00:14:18.873 "crdt3": 0 00:14:18.873 } 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }, 00:14:18.873 { 00:14:18.873 "subsystem": "iscsi", 00:14:18.873 "config": [ 00:14:18.873 { 00:14:18.873 "method": "iscsi_set_options", 00:14:18.873 "params": { 00:14:18.873 "node_base": "iqn.2016-06.io.spdk", 00:14:18.873 "max_sessions": 128, 00:14:18.873 "max_connections_per_session": 2, 00:14:18.873 "max_queue_depth": 64, 00:14:18.873 "default_time2wait": 2, 00:14:18.873 "default_time2retain": 20, 00:14:18.873 "first_burst_length": 8192, 00:14:18.873 "immediate_data": true, 00:14:18.873 "allow_duplicated_isid": false, 00:14:18.873 "error_recovery_level": 0, 00:14:18.873 "nop_timeout": 60, 00:14:18.873 "nop_in_interval": 30, 00:14:18.873 "disable_chap": false, 00:14:18.873 "require_chap": false, 00:14:18.873 "mutual_chap": false, 00:14:18.873 "chap_group": 0, 00:14:18.873 "max_large_datain_per_connection": 64, 00:14:18.873 "max_r2t_per_connection": 4, 00:14:18.873 "pdu_pool_size": 36864, 00:14:18.873 "immediate_data_pool_size": 16384, 00:14:18.873 "data_out_pool_size": 2048 00:14:18.873 } 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 } 00:14:18.873 ] 00:14:18.873 }' 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 71317 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71317 ']' 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71317 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71317 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:18.873 killing process with pid 71317 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71317' 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71317 00:14:18.873 03:41:11 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71317 00:14:20.781 [2024-10-01 03:41:12.837054] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:20.781 [2024-10-01 03:41:12.874038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:20.781 [2024-10-01 03:41:12.874148] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:20.781 [2024-10-01 03:41:12.885038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:20.781 [2024-10-01 03:41:12.885086] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: 
ublk0: remove from tailq 00:14:20.781 [2024-10-01 03:41:12.885094] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:20.781 [2024-10-01 03:41:12.885118] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:20.781 [2024-10-01 03:41:12.885234] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:21.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=71383 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 71383 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- common/autotest_common.sh@831 -- # '[' -z 71383 ']' 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:21.721 03:41:14 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:14:21.721 "subsystems": [ 00:14:21.721 { 00:14:21.721 "subsystem": "fsdev", 00:14:21.721 "config": [ 00:14:21.721 { 00:14:21.721 "method": "fsdev_set_opts", 00:14:21.721 "params": { 00:14:21.721 "fsdev_io_pool_size": 65535, 00:14:21.721 "fsdev_io_cache_size": 256 00:14:21.721 } 00:14:21.721 } 00:14:21.721 ] 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "subsystem": "keyring", 00:14:21.721 "config": [] 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "subsystem": "iobuf", 00:14:21.721 "config": [ 00:14:21.721 { 00:14:21.721 "method": "iobuf_set_options", 00:14:21.721 "params": { 00:14:21.721 "small_pool_count": 8192, 00:14:21.721 "large_pool_count": 1024, 00:14:21.721 "small_bufsize": 8192, 00:14:21.721 "large_bufsize": 135168 00:14:21.721 } 00:14:21.721 } 00:14:21.721 ] 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "subsystem": "sock", 00:14:21.721 "config": [ 00:14:21.721 { 00:14:21.721 "method": "sock_set_default_impl", 00:14:21.721 "params": { 00:14:21.721 "impl_name": "posix" 00:14:21.721 } 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "method": "sock_impl_set_options", 00:14:21.721 "params": { 00:14:21.721 "impl_name": "ssl", 00:14:21.721 "recv_buf_size": 4096, 00:14:21.721 "send_buf_size": 4096, 00:14:21.721 "enable_recv_pipe": true, 00:14:21.721 "enable_quickack": false, 00:14:21.721 "enable_placement_id": 0, 00:14:21.721 "enable_zerocopy_send_server": true, 00:14:21.721 "enable_zerocopy_send_client": false, 00:14:21.721 "zerocopy_threshold": 0, 00:14:21.721 "tls_version": 0, 00:14:21.721 "enable_ktls": false 00:14:21.721 } 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "method": "sock_impl_set_options", 00:14:21.721 "params": { 00:14:21.721 "impl_name": "posix", 00:14:21.721 "recv_buf_size": 2097152, 00:14:21.721 "send_buf_size": 2097152, 00:14:21.721 "enable_recv_pipe": true, 00:14:21.721 "enable_quickack": false, 00:14:21.721 "enable_placement_id": 0, 00:14:21.721 "enable_zerocopy_send_server": true, 00:14:21.721 "enable_zerocopy_send_client": false, 00:14:21.721 "zerocopy_threshold": 0, 00:14:21.721 
"tls_version": 0, 00:14:21.721 "enable_ktls": false 00:14:21.721 } 00:14:21.721 } 00:14:21.721 ] 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "subsystem": "vmd", 00:14:21.721 "config": [] 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "subsystem": "accel", 00:14:21.721 "config": [ 00:14:21.721 { 00:14:21.721 "method": "accel_set_options", 00:14:21.721 "params": { 00:14:21.721 "small_cache_size": 128, 00:14:21.721 "large_cache_size": 16, 00:14:21.721 "task_count": 2048, 00:14:21.721 "sequence_count": 2048, 00:14:21.721 "buf_count": 2048 00:14:21.721 } 00:14:21.721 } 00:14:21.721 ] 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "subsystem": "bdev", 00:14:21.721 "config": [ 00:14:21.721 { 00:14:21.721 "method": "bdev_set_options", 00:14:21.721 "params": { 00:14:21.721 "bdev_io_pool_size": 65535, 00:14:21.721 "bdev_io_cache_size": 256, 00:14:21.721 "bdev_auto_examine": true, 00:14:21.721 "iobuf_small_cache_size": 128, 00:14:21.721 "iobuf_large_cache_size": 16 00:14:21.721 } 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "method": "bdev_raid_set_options", 00:14:21.721 "params": { 00:14:21.721 "process_window_size_kb": 1024, 00:14:21.721 "process_max_bandwidth_mb_sec": 0 00:14:21.721 } 00:14:21.721 }, 00:14:21.721 { 00:14:21.721 "method": "bdev_iscsi_set_options", 00:14:21.721 "params": { 00:14:21.721 "timeout_sec": 30 00:14:21.722 } 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "method": "bdev_nvme_set_options", 00:14:21.722 "params": { 00:14:21.722 "action_on_timeout": "none", 00:14:21.722 "timeout_us": 0, 00:14:21.722 "timeout_admin_us": 0, 00:14:21.722 "keep_alive_timeout_ms": 10000, 00:14:21.722 "arbitration_burst": 0, 00:14:21.722 "low_priority_weight": 0, 00:14:21.722 "medium_priority_weight": 0, 00:14:21.722 "high_priority_weight": 0, 00:14:21.722 "nvme_adminq_poll_period_us": 10000, 00:14:21.722 "nvme_ioq_poll_period_us": 0, 00:14:21.722 "io_queue_requests": 0, 00:14:21.722 "delay_cmd_submit": true, 00:14:21.722 "transport_retry_count": 4, 00:14:21.722 "bdev_retry_count": 3, 00:14:21.722 "transport_ack_timeout": 0, 00:14:21.722 "ctrlr_loss_timeout_sec": 0, 00:14:21.722 "reconnect_delay_sec": 0, 00:14:21.722 "fast_io_fail_timeout_sec": 0, 00:14:21.722 "disable_auto_failback": false, 00:14:21.722 "generate_uuids": false, 00:14:21.722 "transport_tos": 0, 00:14:21.722 "nvme_error_stat": false, 00:14:21.722 "rdma_srq_size": 0, 00:14:21.722 "io_path_stat": false, 00:14:21.722 "allow_accel_sequence": false, 00:14:21.722 "rdma_max_cq_size": 0, 00:14:21.722 "rdma_cm_event_timeout_ms": 0, 00:14:21.722 "dhchap_digests": [ 00:14:21.722 "sha256", 00:14:21.722 "sha384", 00:14:21.722 "sha512" 00:14:21.722 ], 00:14:21.722 "dhchap_dhgroups": [ 00:14:21.722 "null", 00:14:21.722 "ffdhe2048", 00:14:21.722 "ffdhe3072", 00:14:21.722 "ffdhe4096", 00:14:21.722 "ffdhe6144", 00:14:21.722 "ffdhe8192" 00:14:21.722 ] 00:14:21.722 } 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "method": "bdev_nvme_set_hotplug", 00:14:21.722 "params": { 00:14:21.722 "period_us": 100000, 00:14:21.722 "enable": false 00:14:21.722 } 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "method": "bdev_malloc_create", 00:14:21.722 "params": { 00:14:21.722 "name": "malloc0", 00:14:21.722 "num_blocks": 8192, 00:14:21.722 "block_size": 4096, 00:14:21.722 "physical_block_size": 4096, 00:14:21.722 "uuid": "42179850-11de-41aa-923f-11b6d6bc53f9", 00:14:21.722 "optimal_io_boundary": 0, 00:14:21.722 "md_size": 0, 00:14:21.722 "dif_type": 0, 00:14:21.722 "dif_is_head_of_md": false, 00:14:21.722 "dif_pi_format": 0 00:14:21.722 } 00:14:21.722 }, 00:14:21.722 { 
00:14:21.722 "method": "bdev_wait_for_examine" 00:14:21.722 } 00:14:21.722 ] 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "subsystem": "scsi", 00:14:21.722 "config": null 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "subsystem": "scheduler", 00:14:21.722 "config": [ 00:14:21.722 { 00:14:21.722 "method": "framework_set_scheduler", 00:14:21.722 "params": { 00:14:21.722 "name": "static" 00:14:21.722 } 00:14:21.722 } 00:14:21.722 ] 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "subsystem": "vhost_scsi", 00:14:21.722 "config": [] 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "subsystem": "vhost_blk", 00:14:21.722 "config": [] 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "subsystem": "ublk", 00:14:21.722 "config": [ 00:14:21.722 { 00:14:21.722 "method": "ublk_create_target", 00:14:21.722 "params": { 00:14:21.722 "cpumask": "1" 00:14:21.722 } 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "method": "ublk_start_disk", 00:14:21.722 "params": { 00:14:21.722 "bdev_name": "malloc0", 00:14:21.722 "ublk_id": 0, 00:14:21.722 "num_queues": 1, 00:14:21.722 "queue_depth": 128 00:14:21.722 } 00:14:21.722 } 00:14:21.722 ] 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "subsystem": "nbd", 00:14:21.722 "config": [] 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "subsystem": "nvmf", 00:14:21.722 "config": [ 00:14:21.722 { 00:14:21.722 "method": "nvmf_set_config", 00:14:21.722 "params": { 00:14:21.722 "discovery_filter": "match_any", 00:14:21.722 "admin_cmd_passthru": { 00:14:21.722 "identify_ctrlr": false 00:14:21.722 }, 00:14:21.722 "dhchap_digests": [ 00:14:21.722 "sha256", 00:14:21.722 "sha384", 00:14:21.722 "sha512" 00:14:21.722 ], 00:14:21.722 "dhchap_dhgroups": [ 00:14:21.722 "null", 00:14:21.722 "ffdhe2048", 00:14:21.722 "ffdhe3072", 00:14:21.722 "ffdhe4096", 00:14:21.722 "ffdhe6144", 00:14:21.722 "ffdhe8192" 00:14:21.722 ] 00:14:21.722 } 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "method": "nvmf_set_max_subsystems", 00:14:21.722 "params": { 00:14:21.722 "max_subsystems": 1024 00:14:21.722 } 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "method": "nvmf_set_crdt", 00:14:21.722 "params": { 00:14:21.722 "crdt1": 0, 00:14:21.722 "crdt2": 0, 00:14:21.722 "crdt3": 0 00:14:21.722 } 00:14:21.722 } 00:14:21.722 ] 00:14:21.722 }, 00:14:21.722 { 00:14:21.722 "subsystem": "iscsi", 00:14:21.722 "config": [ 00:14:21.722 { 00:14:21.722 "method": "iscsi_set_options", 00:14:21.722 "params": { 00:14:21.722 "node_base": "iqn.2016-06.io.spdk", 00:14:21.722 "max_sessions": 128, 00:14:21.722 "max_connections_per_session": 2, 00:14:21.722 "max_queue_depth": 64, 00:14:21.722 "default_time2wait": 2, 00:14:21.722 "default_time2retain": 20, 00:14:21.722 "first_burst_length": 8192, 00:14:21.722 "immediate_data": true, 00:14:21.722 "allow_duplicated_isid": false, 00:14:21.722 "error_recovery_level": 0, 00:14:21.722 "nop_timeout": 60, 00:14:21.722 "nop_in_interval": 30, 00:14:21.722 "disable_chap": false, 00:14:21.722 "require_chap": false, 00:14:21.722 "mutual_chap": false, 00:14:21.722 "chap_group": 0, 00:14:21.722 "max_large_datain_per_connection": 64, 00:14:21.722 "max_r2t_per_connection": 4, 00:14:21.722 "pdu_pool_size": 36864, 00:14:21.722 "immediate_data_pool_size": 16384, 00:14:21.722 "data_out_pool_size": 2048 00:14:21.722 } 00:14:21.722 } 00:14:21.722 ] 00:14:21.722 } 00:14:21.722 ] 00:14:21.722 }' 00:14:21.982 [2024-10-01 03:41:14.278546] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:14:21.982 [2024-10-01 03:41:14.278823] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71383 ] 00:14:21.982 [2024-10-01 03:41:14.425792] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.242 [2024-10-01 03:41:14.613349] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.812 [2024-10-01 03:41:15.301020] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:22.812 [2024-10-01 03:41:15.301705] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:22.812 [2024-10-01 03:41:15.309111] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:14:22.812 [2024-10-01 03:41:15.309175] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:14:22.812 [2024-10-01 03:41:15.309181] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:22.812 [2024-10-01 03:41:15.309187] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:22.812 [2024-10-01 03:41:15.318092] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:22.812 [2024-10-01 03:41:15.318110] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:22.812 [2024-10-01 03:41:15.325024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:22.812 [2024-10-01 03:41:15.325105] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:22.812 [2024-10-01 03:41:15.342023] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # return 0 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 71383 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@950 -- # '[' -z 71383 ']' 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # kill -0 71383 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # uname 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71383 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:23.073 killing process with pid 71383 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71383' 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@969 -- # kill 71383 00:14:23.073 03:41:15 ublk.test_save_ublk_config -- common/autotest_common.sh@974 -- # wait 71383 00:14:24.014 [2024-10-01 03:41:16.462338] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:24.014 [2024-10-01 03:41:16.498042] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:24.014 [2024-10-01 03:41:16.498150] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:24.014 [2024-10-01 03:41:16.501280] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:24.014 [2024-10-01 03:41:16.501322] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:24.014 [2024-10-01 03:41:16.501329] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:24.014 [2024-10-01 03:41:16.501354] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:24.014 [2024-10-01 03:41:16.501467] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:25.402 03:41:17 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:14:25.402 00:14:25.402 real 0m8.170s 00:14:25.402 user 0m5.377s 00:14:25.402 sys 0m3.424s 00:14:25.402 03:41:17 ublk.test_save_ublk_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:25.402 ************************************ 00:14:25.402 END TEST test_save_ublk_config 00:14:25.402 ************************************ 00:14:25.402 03:41:17 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:14:25.402 03:41:17 ublk -- ublk/ublk.sh@139 -- # spdk_pid=71455 00:14:25.402 03:41:17 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:25.402 03:41:17 ublk -- ublk/ublk.sh@141 -- # waitforlisten 71455 00:14:25.402 03:41:17 ublk -- common/autotest_common.sh@831 -- # '[' -z 71455 ']' 00:14:25.402 03:41:17 ublk -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:25.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:25.402 03:41:17 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:25.402 03:41:17 ublk -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:25.402 03:41:17 ublk -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:25.402 03:41:17 ublk -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:25.402 03:41:17 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:25.402 [2024-10-01 03:41:17.941491] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
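Unlike the single-core targets earlier, this spdk_tgt instance is launched with '-m 0x3', so the EAL parameters that follow carry '-c 0x3' and two reactors come up, on cores 0 and 1. The mask is simply a bitmap of host cores:

  build/bin/spdk_tgt -m 0x1 -L ublk   # core 0 only, as in the save-config runs above
  build/bin/spdk_tgt -m 0x3 -L ublk   # cores 0 and 1, as here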
00:14:25.402 [2024-10-01 03:41:17.941622] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71455 ] 00:14:25.662 [2024-10-01 03:41:18.092778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:25.923 [2024-10-01 03:41:18.362207] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.923 [2024-10-01 03:41:18.362295] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:26.868 03:41:19 ublk -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:26.868 03:41:19 ublk -- common/autotest_common.sh@864 -- # return 0 00:14:26.868 03:41:19 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:14:26.868 03:41:19 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:26.868 03:41:19 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:26.868 03:41:19 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.868 ************************************ 00:14:26.868 START TEST test_create_ublk 00:14:26.868 ************************************ 00:14:26.868 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@1125 -- # test_create_ublk 00:14:26.868 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:14:26.868 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.868 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:26.868 [2024-10-01 03:41:19.193043] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:26.868 [2024-10-01 03:41:19.194870] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:26.868 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.868 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:14:26.868 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:14:26.868 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.868 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.130 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:27.130 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.130 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.130 [2024-10-01 03:41:19.457240] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:14:27.130 [2024-10-01 03:41:19.457737] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:27.130 [2024-10-01 03:41:19.457784] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:27.130 [2024-10-01 03:41:19.457792] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:27.130 [2024-10-01 03:41:19.466437] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:27.130 [2024-10-01 03:41:19.466472] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:27.130 
[2024-10-01 03:41:19.473070] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:27.130 [2024-10-01 03:41:19.473853] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:27.130 [2024-10-01 03:41:19.489090] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:27.130 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:14:27.130 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.130 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:27.130 03:41:19 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:14:27.130 { 00:14:27.130 "ublk_device": "/dev/ublkb0", 00:14:27.130 "id": 0, 00:14:27.130 "queue_depth": 512, 00:14:27.130 "num_queues": 4, 00:14:27.130 "bdev_name": "Malloc0" 00:14:27.130 } 00:14:27.130 ]' 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:14:27.130 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:14:27.392 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:27.392 03:41:19 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
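run_fio_test has just assembled a single-job fio run against /dev/ublkb0: direct psync writes of pattern 0xcc across the full 128 MiB device for 10 seconds, with verify options set (the 'verification read phase will never start' notice below is expected, since time_based lets the write phase consume the whole runtime). The same command reflowed for readability, flags exactly as in the trace:

  fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 \
      --rw=write --direct=1 --time_based --runtime=10 \
      --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0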
00:14:27.392 03:41:19 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:14:27.392 fio: verification read phase will never start because write phase uses all of runtime 00:14:27.392 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:14:27.392 fio-3.35 00:14:27.392 Starting 1 process 00:14:37.373 00:14:37.373 fio_test: (groupid=0, jobs=1): err= 0: pid=71505: Tue Oct 1 03:41:29 2024 00:14:37.373 write: IOPS=17.6k, BW=68.6MiB/s (71.9MB/s)(686MiB/10001msec); 0 zone resets 00:14:37.373 clat (usec): min=32, max=3958, avg=56.19, stdev=88.16 00:14:37.373 lat (usec): min=32, max=3959, avg=56.64, stdev=88.20 00:14:37.373 clat percentiles (usec): 00:14:37.373 | 1.00th=[ 34], 5.00th=[ 35], 10.00th=[ 35], 20.00th=[ 48], 00:14:37.373 | 30.00th=[ 50], 40.00th=[ 52], 50.00th=[ 53], 60.00th=[ 55], 00:14:37.373 | 70.00th=[ 57], 80.00th=[ 59], 90.00th=[ 63], 95.00th=[ 67], 00:14:37.373 | 99.00th=[ 84], 99.50th=[ 215], 99.90th=[ 1631], 99.95th=[ 2540], 00:14:37.373 | 99.99th=[ 3425] 00:14:37.373 bw ( KiB/s): min=55280, max=86160, per=100.00%, avg=70420.21, stdev=8344.20, samples=19 00:14:37.373 iops : min=13820, max=21540, avg=17605.05, stdev=2086.05, samples=19 00:14:37.373 lat (usec) : 50=31.35%, 100=67.89%, 250=0.39%, 500=0.20%, 750=0.01% 00:14:37.373 lat (usec) : 1000=0.02% 00:14:37.373 lat (msec) : 2=0.06%, 4=0.08% 00:14:37.373 cpu : usr=2.42%, sys=13.27%, ctx=175531, majf=0, minf=796 00:14:37.373 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:37.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:37.373 issued rwts: total=0,175531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:37.373 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:37.373 00:14:37.373 Run status group 0 (all jobs): 00:14:37.373 WRITE: bw=68.6MiB/s (71.9MB/s), 68.6MiB/s-68.6MiB/s (71.9MB/s-71.9MB/s), io=686MiB (719MB), run=10001-10001msec 00:14:37.373 00:14:37.373 Disk stats (read/write): 00:14:37.373 ublkb0: ios=0/173708, merge=0/0, ticks=0/8231, in_queue=8231, util=99.02% 00:14:37.633 03:41:29 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.633 [2024-10-01 03:41:29.929247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:37.633 [2024-10-01 03:41:29.962518] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:37.633 [2024-10-01 03:41:29.963337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:37.633 [2024-10-01 03:41:29.970058] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:37.633 [2024-10-01 03:41:29.974268] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:37.633 [2024-10-01 03:41:29.974284] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.633 03:41:29 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 0 
00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.633 [2024-10-01 03:41:29.983085] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:14:37.633 request: 00:14:37.633 { 00:14:37.633 "ublk_id": 0, 00:14:37.633 "method": "ublk_stop_disk", 00:14:37.633 "req_id": 1 00:14:37.633 } 00:14:37.633 Got JSON-RPC error response 00:14:37.633 response: 00:14:37.633 { 00:14:37.633 "code": -19, 00:14:37.633 "message": "No such device" 00:14:37.633 } 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:37.633 03:41:29 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.633 03:41:29 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.633 [2024-10-01 03:41:29.993084] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:37.633 [2024-10-01 03:41:29.994995] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:37.633 [2024-10-01 03:41:29.995034] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:37.633 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.633 03:41:30 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:37.633 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.633 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.892 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.892 03:41:30 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:14:37.892 03:41:30 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:14:37.892 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.892 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:37.892 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.892 03:41:30 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:37.892 03:41:30 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:14:37.892 03:41:30 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:37.892 03:41:30 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:37.892 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.892 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.153 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.153 03:41:30 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:38.153 03:41:30 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:14:38.153 ************************************ 00:14:38.153 END TEST test_create_ublk 00:14:38.153 ************************************ 00:14:38.153 03:41:30 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:38.153 00:14:38.153 real 0m11.308s 00:14:38.153 user 0m0.555s 00:14:38.153 sys 0m1.419s 00:14:38.153 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:38.153 03:41:30 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.153 03:41:30 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:14:38.153 03:41:30 ublk -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:14:38.153 03:41:30 ublk -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:38.153 03:41:30 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.153 ************************************ 00:14:38.153 START TEST test_create_multi_ublk 00:14:38.153 ************************************ 00:14:38.153 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@1125 -- # test_create_multi_ublk 00:14:38.153 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:14:38.153 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.153 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.153 [2024-10-01 03:41:30.556023] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:38.153 [2024-10-01 03:41:30.557354] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:38.153 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.153 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:14:38.154 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:14:38.154 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.154 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:14:38.154 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.154 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.415 [2024-10-01 03:41:30.808139] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
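test_create_multi_ublk, starting above, repeats the single-device flow four times: MAX_DEV_ID=3 was set when ublk.sh was sourced, and seq 0 $MAX_DEV_ID drives one 128 MiB malloc bdev plus one 4-queue, depth-512 ublk disk per id, which is why the Malloc0..Malloc3 and ublk0..ublk3 blocks that follow all look alike. The loop body reduces to this sketch of what the trace shows:

  for i in $(seq 0 3); do
      rpc_cmd bdev_malloc_create -b "Malloc$i" 128 4096
      rpc_cmd ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
  done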
00:14:38.415 [2024-10-01 03:41:30.808464] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:14:38.415 [2024-10-01 03:41:30.808472] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:14:38.415 [2024-10-01 03:41:30.808481] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:14:38.415 [2024-10-01 03:41:30.832035] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:38.415 [2024-10-01 03:41:30.832058] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:38.415 [2024-10-01 03:41:30.844023] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:38.415 [2024-10-01 03:41:30.844561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:14:38.415 [2024-10-01 03:41:30.884025] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.415 03:41:30 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.676 [2024-10-01 03:41:31.132127] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:14:38.676 [2024-10-01 03:41:31.132442] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:14:38.676 [2024-10-01 03:41:31.132451] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:38.676 [2024-10-01 03:41:31.132464] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:38.676 [2024-10-01 03:41:31.140033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:38.676 [2024-10-01 03:41:31.140050] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:38.676 [2024-10-01 03:41:31.148027] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:38.676 [2024-10-01 03:41:31.148561] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:38.676 [2024-10-01 03:41:31.157024] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.676 03:41:31 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.676 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:38.937 [2024-10-01 03:41:31.340129] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:14:38.937 [2024-10-01 03:41:31.340444] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:14:38.937 [2024-10-01 03:41:31.340455] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:14:38.937 [2024-10-01 03:41:31.340462] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:14:38.937 [2024-10-01 03:41:31.348032] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:38.937 [2024-10-01 03:41:31.348052] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:38.937 [2024-10-01 03:41:31.356022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:38.937 [2024-10-01 03:41:31.356564] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:14:38.937 [2024-10-01 03:41:31.359858] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.937 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:39.198 [2024-10-01 03:41:31.528119] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:14:39.198 [2024-10-01 03:41:31.528427] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:14:39.198 [2024-10-01 03:41:31.528435] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:14:39.198 [2024-10-01 03:41:31.528441] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:14:39.198 [2024-10-01 
03:41:31.537226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:39.198 [2024-10-01 03:41:31.537243] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:39.198 [2024-10-01 03:41:31.544027] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:39.198 [2024-10-01 03:41:31.544558] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:14:39.198 [2024-10-01 03:41:31.553054] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:14:39.198 { 00:14:39.198 "ublk_device": "/dev/ublkb0", 00:14:39.198 "id": 0, 00:14:39.198 "queue_depth": 512, 00:14:39.198 "num_queues": 4, 00:14:39.198 "bdev_name": "Malloc0" 00:14:39.198 }, 00:14:39.198 { 00:14:39.198 "ublk_device": "/dev/ublkb1", 00:14:39.198 "id": 1, 00:14:39.198 "queue_depth": 512, 00:14:39.198 "num_queues": 4, 00:14:39.198 "bdev_name": "Malloc1" 00:14:39.198 }, 00:14:39.198 { 00:14:39.198 "ublk_device": "/dev/ublkb2", 00:14:39.198 "id": 2, 00:14:39.198 "queue_depth": 512, 00:14:39.198 "num_queues": 4, 00:14:39.198 "bdev_name": "Malloc2" 00:14:39.198 }, 00:14:39.198 { 00:14:39.198 "ublk_device": "/dev/ublkb3", 00:14:39.198 "id": 3, 00:14:39.198 "queue_depth": 512, 00:14:39.198 "num_queues": 4, 00:14:39.198 "bdev_name": "Malloc3" 00:14:39.198 } 00:14:39.198 ]' 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:14:39.198 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.199 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
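The ublk_get_disks dump above is what the field-by-field assertions below consume. Done by hand, the same checks look roughly like this (jq filters exactly as in the trace):

  # read device 0's fields back out of the ublk_get_disks JSON
  scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device'   # expect /dev/ublkb0
  scripts/rpc.py ublk_get_disks | jq -r '.[0].queue_depth'   # expect 512
  scripts/rpc.py ublk_get_disks | jq -r '.[0].bdev_name'     # expect Malloc0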
00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:39.460 03:41:31 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:14:39.719 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:39.720 [2024-10-01 03:41:32.208090] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:14:39.720 [2024-10-01 03:41:32.248570] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:39.720 [2024-10-01 03:41:32.249468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:14:39.720 [2024-10-01 03:41:32.259028] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:39.720 [2024-10-01 03:41:32.259263] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:14:39.720 [2024-10-01 03:41:32.259272] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.720 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:39.981 [2024-10-01 03:41:32.275078] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:14:39.981 [2024-10-01 03:41:32.309556] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:39.981 [2024-10-01 03:41:32.310446] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:14:39.981 [2024-10-01 03:41:32.319029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:39.981 [2024-10-01 03:41:32.319252] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:14:39.981 [2024-10-01 03:41:32.319260] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:39.981 [2024-10-01 03:41:32.335079] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:14:39.981 [2024-10-01 03:41:32.371570] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:39.981 [2024-10-01 03:41:32.372413] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:14:39.981 [2024-10-01 03:41:32.378033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:39.981 [2024-10-01 03:41:32.378247] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:14:39.981 [2024-10-01 03:41:32.378254] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
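Teardown is symmetric with setup: each ublk_stop_disk call drives a UBLK_CMD_STOP_DEV / UBLK_CMD_DEL_DEV pair for its device, and only then is the target itself torn down. Sketched manually, with the 120 s timeout matching the cleanup call later in this log:

  scripts/rpc.py ublk_stop_disk 3               # STOP_DEV, then DEL_DEV for ublk3
  scripts/rpc.py -t 120 ublk_destroy_target     # shuts the whole ublk target down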
00:14:39.981 [2024-10-01 03:41:32.394093] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:14:39.981 [2024-10-01 03:41:32.427477] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:14:39.981 [2024-10-01 03:41:32.428319] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:14:39.981 [2024-10-01 03:41:32.430570] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:14:39.981 [2024-10-01 03:41:32.430777] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:14:39.981 [2024-10-01 03:41:32.430785] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.981 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:14:40.241 [2024-10-01 03:41:32.625080] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:40.241 [2024-10-01 03:41:32.626950] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:40.241 [2024-10-01 03:41:32.626977] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:14:40.241 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:14:40.241 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.241 03:41:32 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:14:40.241 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.241 03:41:32 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:40.530 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:40.530 03:41:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:40.530 03:41:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:14:40.530 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:40.530 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.096 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:14:41.355 00:14:41.355 real 0m3.337s 00:14:41.355 user 0m0.800s 00:14:41.355 sys 0m0.148s 00:14:41.355 ************************************ 00:14:41.355 END TEST test_create_multi_ublk 00:14:41.355 ************************************ 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:41.355 03:41:33 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:14:41.614 03:41:33 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:14:41.614 03:41:33 ublk -- ublk/ublk.sh@147 -- # cleanup 00:14:41.614 03:41:33 ublk -- ublk/ublk.sh@130 -- # killprocess 71455 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@950 -- # '[' -z 71455 ']' 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@954 -- # kill -0 71455 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@955 -- # uname 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71455 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:41.614 killing process with pid 71455 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71455' 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@969 -- # kill 71455 00:14:41.614 03:41:33 ublk -- common/autotest_common.sh@974 -- # wait 71455 00:14:42.181 [2024-10-01 03:41:34.505961] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:14:42.181 [2024-10-01 03:41:34.506033] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:14:42.748 00:14:42.748 real 0m25.840s 00:14:42.748 user 0m34.964s 00:14:42.748 sys 0m11.247s 00:14:42.748 03:41:35 ublk -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:42.748 03:41:35 ublk -- common/autotest_common.sh@10 -- # set +x 00:14:42.748 ************************************ 00:14:42.748 END TEST ublk 00:14:42.748 ************************************ 00:14:43.006 03:41:35 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:43.006 03:41:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 
1 ']' 00:14:43.006 03:41:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:43.006 03:41:35 -- common/autotest_common.sh@10 -- # set +x 00:14:43.006 ************************************ 00:14:43.006 START TEST ublk_recovery 00:14:43.006 ************************************ 00:14:43.006 03:41:35 ublk_recovery -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:14:43.006 * Looking for test storage... 00:14:43.006 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:14:43.006 03:41:35 ublk_recovery -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:14:43.006 03:41:35 ublk_recovery -- common/autotest_common.sh@1681 -- # lcov --version 00:14:43.006 03:41:35 ublk_recovery -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:14:43.006 03:41:35 ublk_recovery -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:14:43.006 03:41:35 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:43.006 03:41:35 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:43.006 03:41:35 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:43.006 03:41:35 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:43.006 03:41:35 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:43.006 03:41:35 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:43.006 03:41:35 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:43.006 03:41:35 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:43.006 03:41:35 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:43.007 03:41:35 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:14:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.007 --rc genhtml_branch_coverage=1 00:14:43.007 --rc genhtml_function_coverage=1 00:14:43.007 --rc genhtml_legend=1 00:14:43.007 --rc geninfo_all_blocks=1 00:14:43.007 --rc geninfo_unexecuted_blocks=1 00:14:43.007 00:14:43.007 ' 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:14:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.007 --rc genhtml_branch_coverage=1 00:14:43.007 --rc genhtml_function_coverage=1 00:14:43.007 --rc genhtml_legend=1 00:14:43.007 --rc geninfo_all_blocks=1 00:14:43.007 --rc geninfo_unexecuted_blocks=1 00:14:43.007 00:14:43.007 ' 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:14:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.007 --rc genhtml_branch_coverage=1 00:14:43.007 --rc genhtml_function_coverage=1 00:14:43.007 --rc genhtml_legend=1 00:14:43.007 --rc geninfo_all_blocks=1 00:14:43.007 --rc geninfo_unexecuted_blocks=1 00:14:43.007 00:14:43.007 ' 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:14:43.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:43.007 --rc genhtml_branch_coverage=1 00:14:43.007 --rc genhtml_function_coverage=1 00:14:43.007 --rc genhtml_legend=1 00:14:43.007 --rc geninfo_all_blocks=1 00:14:43.007 --rc geninfo_unexecuted_blocks=1 00:14:43.007 00:14:43.007 ' 00:14:43.007 03:41:35 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:14:43.007 03:41:35 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:14:43.007 03:41:35 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:14:43.007 03:41:35 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:14:43.007 03:41:35 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:14:43.007 03:41:35 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:14:43.007 03:41:35 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:14:43.007 03:41:35 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:14:43.007 03:41:35 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:14:43.007 03:41:35 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:14:43.007 03:41:35 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=71859 00:14:43.007 03:41:35 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:43.007 03:41:35 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:43.007 03:41:35 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 71859 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 71859 ']' 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:43.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:43.007 03:41:35 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:43.007 [2024-10-01 03:41:35.529163] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:14:43.007 [2024-10-01 03:41:35.529312] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71859 ] 00:14:43.265 [2024-10-01 03:41:35.677494] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:43.524 [2024-10-01 03:41:35.874681] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:43.524 [2024-10-01 03:41:35.874750] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.091 03:41:36 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:44.091 03:41:36 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:44.091 03:41:36 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:14:44.091 03:41:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.091 03:41:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.091 [2024-10-01 03:41:36.537026] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:44.091 [2024-10-01 03:41:36.538602] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:44.091 03:41:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.091 03:41:36 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:44.091 03:41:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.091 03:41:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.350 malloc0 00:14:44.350 03:41:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.350 03:41:36 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:14:44.350 03:41:36 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.350 03:41:36 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:44.350 [2024-10-01 03:41:36.649156] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:14:44.350 [2024-10-01 03:41:36.649257] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:14:44.350 [2024-10-01 03:41:36.649268] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:44.350 [2024-10-01 03:41:36.649276] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:14:44.350 [2024-10-01 03:41:36.658132] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:14:44.350 [2024-10-01 03:41:36.658152] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:14:44.350 [2024-10-01 03:41:36.665032] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:14:44.350 [2024-10-01 03:41:36.665173] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:14:44.350 [2024-10-01 03:41:36.682035] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:14:44.350 1 00:14:44.350 03:41:36 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.350 03:41:36 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:14:45.285 03:41:37 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=71894 00:14:45.285 03:41:37 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:14:45.285 03:41:37 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:14:45.285 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:14:45.285 fio-3.35 00:14:45.285 Starting 1 process 00:14:50.544 03:41:42 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 71859 00:14:50.544 03:41:42 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:14:55.883 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 71859 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:14:55.883 03:41:47 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=72006 00:14:55.883 03:41:47 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:14:55.883 03:41:47 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:14:55.883 03:41:47 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 72006 00:14:55.883 03:41:47 ublk_recovery -- common/autotest_common.sh@831 -- # '[' -z 72006 ']' 00:14:55.883 03:41:47 ublk_recovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:55.883 03:41:47 ublk_recovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:55.883 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:55.883 03:41:47 ublk_recovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:55.883 03:41:47 ublk_recovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:55.883 03:41:47 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:55.883 [2024-10-01 03:41:47.780720] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
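This is the crux of the recovery test: fio keeps 128-deep random I/O running against /dev/ublkb1 while the original target (pid 71859) is killed with SIGKILL, and a fresh target (pid 72006) must reattach to the still-live kernel device rather than recreate it. The reattach path, sketched as manual RPCs with the names used in the trace that follows:

  # in the new target: recreate the backing bdev, then recover (not start) ublk 1
  scripts/rpc.py bdev_malloc_create -b malloc0 64 4096
  scripts/rpc.py ublk_recover_disk malloc0 1

ublk_recover_disk issues UBLK_CMD_GET_DEV_INFO and then the UBLK_CMD_START_USER_RECOVERY / UBLK_CMD_END_USER_RECOVERY pair, as the DEBUG lines below show.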
00:14:55.883 [2024-10-01 03:41:47.780854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72006 ] 00:14:55.883 [2024-10-01 03:41:47.932312] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:55.883 [2024-10-01 03:41:48.131915] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:14:55.883 [2024-10-01 03:41:48.131987] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@864 -- # return 0 00:14:56.450 03:41:48 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.450 [2024-10-01 03:41:48.773027] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:14:56.450 [2024-10-01 03:41:48.774614] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.450 03:41:48 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.450 malloc0 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.450 03:41:48 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:14:56.450 [2024-10-01 03:41:48.884167] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:14:56.450 [2024-10-01 03:41:48.884212] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:14:56.450 [2024-10-01 03:41:48.884223] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:14:56.450 1 00:14:56.450 03:41:48 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.450 03:41:48 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 71894 00:14:56.450 [2024-10-01 03:41:48.893025] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:14:56.450 [2024-10-01 03:41:48.893055] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:14:56.450 [2024-10-01 03:41:48.893064] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:14:56.450 [2024-10-01 03:41:48.893144] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:14:56.450 [2024-10-01 03:41:48.901029] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:14:56.450 [2024-10-01 03:41:48.903998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:14:56.450 [2024-10-01 03:41:48.909026] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:14:56.450 [2024-10-01 
03:41:48.909048] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:15:52.713 00:15:52.713 fio_test: (groupid=0, jobs=1): err= 0: pid=71897: Tue Oct 1 03:42:37 2024 00:15:52.713 read: IOPS=25.4k, BW=99.3MiB/s (104MB/s)(5958MiB/60003msec) 00:15:52.713 slat (nsec): min=1080, max=336926, avg=5510.77, stdev=1985.57 00:15:52.713 clat (usec): min=888, max=6223.1k, avg=2506.60, stdev=42734.77 00:15:52.713 lat (usec): min=893, max=6223.1k, avg=2512.11, stdev=42734.77 00:15:52.713 clat percentiles (usec): 00:15:52.713 | 1.00th=[ 1778], 5.00th=[ 1909], 10.00th=[ 1958], 20.00th=[ 1991], 00:15:52.713 | 30.00th=[ 2024], 40.00th=[ 2040], 50.00th=[ 2057], 60.00th=[ 2073], 00:15:52.713 | 70.00th=[ 2089], 80.00th=[ 2114], 90.00th=[ 2245], 95.00th=[ 3425], 00:15:52.713 | 99.00th=[ 5604], 99.50th=[ 6063], 99.90th=[ 8094], 99.95th=[11076], 00:15:52.713 | 99.99th=[13042] 00:15:52.713 bw ( KiB/s): min=45672, max=120216, per=100.00%, avg=113041.93, stdev=14286.12, samples=107 00:15:52.713 iops : min=11418, max=30054, avg=28260.48, stdev=3571.53, samples=107 00:15:52.713 write: IOPS=25.4k, BW=99.2MiB/s (104MB/s)(5951MiB/60003msec); 0 zone resets 00:15:52.713 slat (nsec): min=1101, max=288668, avg=5721.63, stdev=1991.46 00:15:52.713 clat (usec): min=705, max=6223.3k, avg=2519.46, stdev=37714.27 00:15:52.713 lat (usec): min=711, max=6223.3k, avg=2525.19, stdev=37714.26 00:15:52.713 clat percentiles (usec): 00:15:52.713 | 1.00th=[ 1795], 5.00th=[ 1975], 10.00th=[ 2040], 20.00th=[ 2089], 00:15:52.713 | 30.00th=[ 2114], 40.00th=[ 2114], 50.00th=[ 2147], 60.00th=[ 2180], 00:15:52.713 | 70.00th=[ 2180], 80.00th=[ 2212], 90.00th=[ 2311], 95.00th=[ 3359], 00:15:52.713 | 99.00th=[ 5735], 99.50th=[ 6128], 99.90th=[ 8225], 99.95th=[ 9896], 00:15:52.713 | 99.99th=[13173] 00:15:52.713 bw ( KiB/s): min=46216, max=119984, per=100.00%, avg=112884.99, stdev=14115.30, samples=107 00:15:52.713 iops : min=11554, max=29996, avg=28221.24, stdev=3528.83, samples=107 00:15:52.713 lat (usec) : 750=0.01%, 1000=0.01% 00:15:52.713 lat (msec) : 2=14.87%, 4=81.65%, 10=3.43%, 20=0.05%, >=2000=0.01% 00:15:52.713 cpu : usr=5.52%, sys=29.22%, ctx=103424, majf=0, minf=13 00:15:52.713 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:15:52.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:52.713 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:52.713 issued rwts: total=1525375,1523428,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:52.713 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:52.713 00:15:52.713 Run status group 0 (all jobs): 00:15:52.713 READ: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=5958MiB (6248MB), run=60003-60003msec 00:15:52.713 WRITE: bw=99.2MiB/s (104MB/s), 99.2MiB/s-99.2MiB/s (104MB/s-104MB/s), io=5951MiB (6240MB), run=60003-60003msec 00:15:52.713 00:15:52.713 Disk stats (read/write): 00:15:52.713 ublkb1: ios=1522222/1520357, merge=0/0, ticks=3714751/3604794, in_queue=7319545, util=99.90% 00:15:52.713 03:42:37 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:15:52.713 03:42:37 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.713 03:42:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.713 [2024-10-01 03:42:37.943294] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:52.713 [2024-10-01 03:42:37.981041] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd 
UBLK_CMD_STOP_DEV completed 00:15:52.713 [2024-10-01 03:42:37.981190] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:52.713 [2024-10-01 03:42:37.990038] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:52.713 [2024-10-01 03:42:37.990135] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:52.713 [2024-10-01 03:42:37.990144] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:52.713 03:42:37 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.713 03:42:37 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:15:52.713 03:42:37 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.713 03:42:37 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.713 [2024-10-01 03:42:38.005097] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:52.713 [2024-10-01 03:42:38.006967] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:52.713 [2024-10-01 03:42:38.006996] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.713 03:42:38 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:15:52.713 03:42:38 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:15:52.713 03:42:38 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 72006 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@950 -- # '[' -z 72006 ']' 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@954 -- # kill -0 72006 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@955 -- # uname 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72006 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.713 killing process with pid 72006 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72006' 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@969 -- # kill 72006 00:15:52.713 03:42:38 ublk_recovery -- common/autotest_common.sh@974 -- # wait 72006 00:15:52.713 [2024-10-01 03:42:39.214804] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:52.713 [2024-10-01 03:42:39.214861] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:52.714 00:15:52.714 real 1m5.031s 00:15:52.714 user 1m41.570s 00:15:52.714 sys 0m38.230s 00:15:52.714 03:42:40 ublk_recovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:52.714 ************************************ 00:15:52.714 END TEST ublk_recovery 00:15:52.714 ************************************ 00:15:52.714 03:42:40 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:52.714 03:42:40 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@256 -- # timing_exit lib 00:15:52.714 03:42:40 -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:52.714 03:42:40 -- common/autotest_common.sh@10 -- # set +x 00:15:52.714 03:42:40 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- 
spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:15:52.714 03:42:40 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:52.714 03:42:40 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:15:52.714 03:42:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.714 03:42:40 -- common/autotest_common.sh@10 -- # set +x 00:15:52.714 ************************************ 00:15:52.714 START TEST ftl 00:15:52.714 ************************************ 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:52.714 * Looking for test storage... 00:15:52.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1681 -- # lcov --version 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:52.714 03:42:40 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.714 03:42:40 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.714 03:42:40 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.714 03:42:40 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.714 03:42:40 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.714 03:42:40 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.714 03:42:40 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.714 03:42:40 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.714 03:42:40 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.714 03:42:40 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.714 03:42:40 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.714 03:42:40 ftl -- scripts/common.sh@344 -- # case "$op" in 00:15:52.714 03:42:40 ftl -- scripts/common.sh@345 -- # : 1 00:15:52.714 03:42:40 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.714 03:42:40 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.714 03:42:40 ftl -- scripts/common.sh@365 -- # decimal 1 00:15:52.714 03:42:40 ftl -- scripts/common.sh@353 -- # local d=1 00:15:52.714 03:42:40 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.714 03:42:40 ftl -- scripts/common.sh@355 -- # echo 1 00:15:52.714 03:42:40 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.714 03:42:40 ftl -- scripts/common.sh@366 -- # decimal 2 00:15:52.714 03:42:40 ftl -- scripts/common.sh@353 -- # local d=2 00:15:52.714 03:42:40 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.714 03:42:40 ftl -- scripts/common.sh@355 -- # echo 2 00:15:52.714 03:42:40 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.714 03:42:40 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.714 03:42:40 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.714 03:42:40 ftl -- scripts/common.sh@368 -- # return 0 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:52.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.714 --rc genhtml_branch_coverage=1 00:15:52.714 --rc genhtml_function_coverage=1 00:15:52.714 --rc genhtml_legend=1 00:15:52.714 --rc geninfo_all_blocks=1 00:15:52.714 --rc geninfo_unexecuted_blocks=1 00:15:52.714 00:15:52.714 ' 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:52.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.714 --rc genhtml_branch_coverage=1 00:15:52.714 --rc genhtml_function_coverage=1 00:15:52.714 --rc genhtml_legend=1 00:15:52.714 --rc geninfo_all_blocks=1 00:15:52.714 --rc geninfo_unexecuted_blocks=1 00:15:52.714 00:15:52.714 ' 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:52.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.714 --rc genhtml_branch_coverage=1 00:15:52.714 --rc genhtml_function_coverage=1 00:15:52.714 --rc genhtml_legend=1 00:15:52.714 --rc geninfo_all_blocks=1 00:15:52.714 --rc geninfo_unexecuted_blocks=1 00:15:52.714 00:15:52.714 ' 00:15:52.714 03:42:40 ftl -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:52.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.714 --rc genhtml_branch_coverage=1 00:15:52.714 --rc genhtml_function_coverage=1 00:15:52.714 --rc genhtml_legend=1 00:15:52.714 --rc geninfo_all_blocks=1 00:15:52.714 --rc geninfo_unexecuted_blocks=1 00:15:52.714 00:15:52.714 ' 00:15:52.714 03:42:40 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:52.714 03:42:40 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:15:52.714 03:42:40 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.714 03:42:40 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.714 03:42:40 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:15:52.714 03:42:40 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:52.714 03:42:40 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.714 03:42:40 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:52.714 03:42:40 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:52.714 03:42:40 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.714 03:42:40 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.714 03:42:40 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:52.714 03:42:40 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:52.714 03:42:40 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:52.714 03:42:40 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:52.714 03:42:40 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:52.714 03:42:40 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:52.714 03:42:40 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.714 03:42:40 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.714 03:42:40 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:52.714 03:42:40 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:52.714 03:42:40 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:52.714 03:42:40 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:52.714 03:42:40 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:52.714 03:42:40 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:52.714 03:42:40 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:52.714 03:42:40 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:52.714 03:42:40 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:52.714 03:42:40 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:52.714 03:42:40 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.715 03:42:40 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:15:52.715 03:42:40 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:15:52.715 03:42:40 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:15:52.715 03:42:40 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:15:52.715 03:42:40 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:52.715 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:52.715 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:52.715 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:52.715 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:52.715 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:15:52.715 03:42:41 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=72816 00:15:52.715 03:42:41 ftl -- ftl/ftl.sh@38 -- # waitforlisten 72816 00:15:52.715 03:42:41 ftl -- common/autotest_common.sh@831 -- # '[' -z 72816 ']' 00:15:52.715 03:42:41 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:15:52.715 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.715 03:42:41 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.715 03:42:41 ftl -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.715 03:42:41 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.715 03:42:41 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.715 03:42:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:52.715 [2024-10-01 03:42:41.156206] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:15:52.715 [2024-10-01 03:42:41.156343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72816 ] 00:15:52.715 [2024-10-01 03:42:41.302795] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:52.715 [2024-10-01 03:42:41.485225] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:52.715 03:42:41 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:52.715 03:42:41 ftl -- common/autotest_common.sh@864 -- # return 0 00:15:52.715 03:42:41 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:15:52.715 03:42:42 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:15:52.715 03:42:42 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:15:52.715 03:42:42 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@50 -- # break 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@63 -- # break 00:15:52.715 03:42:43 ftl -- ftl/ftl.sh@66 -- # killprocess 72816 00:15:52.715 03:42:43 ftl -- common/autotest_common.sh@950 -- # '[' -z 72816 ']' 00:15:52.715 03:42:43 ftl -- common/autotest_common.sh@954 -- # kill -0 72816 00:15:52.715 03:42:43 ftl -- common/autotest_common.sh@955 -- # uname 00:15:52.715 03:42:43 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:52.715 03:42:43 ftl -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72816 00:15:52.715 03:42:43 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:52.715 03:42:43 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:52.715 killing process with pid 72816 00:15:52.715 03:42:43 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72816' 00:15:52.715 03:42:43 ftl -- common/autotest_common.sh@969 -- # kill 72816 00:15:52.715 03:42:43 ftl -- common/autotest_common.sh@974 -- # wait 72816 00:15:52.715 03:42:45 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:15:52.715 03:42:45 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:52.715 03:42:45 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:52.715 03:42:45 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:52.715 03:42:45 ftl -- common/autotest_common.sh@10 -- # set +x 00:15:52.715 ************************************ 00:15:52.715 START TEST ftl_fio_basic 00:15:52.715 ************************************ 00:15:52.715 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:15:52.715 * Looking for test storage... 00:15:52.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.715 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:15:52.715 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:15:52.715 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lcov --version 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:15:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.979 --rc genhtml_branch_coverage=1 00:15:52.979 --rc genhtml_function_coverage=1 00:15:52.979 --rc genhtml_legend=1 00:15:52.979 --rc geninfo_all_blocks=1 00:15:52.979 --rc geninfo_unexecuted_blocks=1 00:15:52.979 00:15:52.979 ' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:15:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.979 --rc genhtml_branch_coverage=1 00:15:52.979 --rc genhtml_function_coverage=1 00:15:52.979 --rc genhtml_legend=1 00:15:52.979 --rc geninfo_all_blocks=1 00:15:52.979 --rc geninfo_unexecuted_blocks=1 00:15:52.979 00:15:52.979 ' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:15:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.979 --rc genhtml_branch_coverage=1 00:15:52.979 --rc genhtml_function_coverage=1 00:15:52.979 --rc genhtml_legend=1 00:15:52.979 --rc geninfo_all_blocks=1 00:15:52.979 --rc geninfo_unexecuted_blocks=1 00:15:52.979 00:15:52.979 ' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:15:52.979 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:52.979 --rc genhtml_branch_coverage=1 00:15:52.979 --rc genhtml_function_coverage=1 00:15:52.979 --rc genhtml_legend=1 00:15:52.979 --rc geninfo_all_blocks=1 00:15:52.979 --rc geninfo_unexecuted_blocks=1 00:15:52.979 00:15:52.979 ' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
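
The identical lcov probe repeats here for the ftl_fio_basic sub-test and selects the same option set. As a usage illustration only (the directory and file names are invented, not from the trace), the exported $LCOV is meant to be expanded unquoted so its embedded --rc flags survive word splitting:

    # Hypothetical coverage capture using the $LCOV exported above; lcov 1.x
    # accepts genhtml_* keys via --rc because lcovrc settings are shared.
    $LCOV --capture --directory build --output-file coverage.info
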
00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:15:52.979 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=72948 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 72948 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@831 -- # '[' -z 72948 ']' 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:52.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:15:52.980 03:42:45 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:15:52.980 [2024-10-01 03:42:45.393569] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
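
One note on the fio.sh prologue traced above: the test list is a plain associative-array lookup keyed by the third positional argument ("basic" here, from the `run_test ftl_fio_basic ... basic` invocation). A minimal sketch of the pattern, with the loop body stubbed out (the real script runs the named fio jobs):

    # fio.sh-style suite selection, reduced to its core.
    declare -A suite
    suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128'
    suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap'

    device=$1 cache_device=$2 tests=${suite[$3]}
    [[ -n $tests ]] || { echo "unknown suite: $3" >&2; exit 1; }
    for t in $tests; do
        echo "would run fio job $t on $device (nv cache: $cache_device)"
    done
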
00:15:52.980 [2024-10-01 03:42:45.393687] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72948 ] 00:15:53.241 [2024-10-01 03:42:45.538973] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:53.241 [2024-10-01 03:42:45.726896] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:15:53.241 [2024-10-01 03:42:45.727164] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:15:53.242 [2024-10-01 03:42:45.727249] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:15:53.811 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:53.811 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # return 0 00:15:53.811 03:42:46 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:15:53.811 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:15:53.811 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:15:53.811 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:15:53.811 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:15:53.812 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:15:54.073 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:15:54.073 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:15:54.073 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:15:54.073 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:15:54.073 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:54.073 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:54.073 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:54.073 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:54.333 { 00:15:54.333 "name": "nvme0n1", 00:15:54.333 "aliases": [ 00:15:54.333 "60a59e7e-dc25-4727-acc4-532447218517" 00:15:54.333 ], 00:15:54.333 "product_name": "NVMe disk", 00:15:54.333 "block_size": 4096, 00:15:54.333 "num_blocks": 1310720, 00:15:54.333 "uuid": "60a59e7e-dc25-4727-acc4-532447218517", 00:15:54.333 "numa_id": -1, 00:15:54.333 "assigned_rate_limits": { 00:15:54.333 "rw_ios_per_sec": 0, 00:15:54.333 "rw_mbytes_per_sec": 0, 00:15:54.333 "r_mbytes_per_sec": 0, 00:15:54.333 "w_mbytes_per_sec": 0 00:15:54.333 }, 00:15:54.333 "claimed": false, 00:15:54.333 "zoned": false, 00:15:54.333 "supported_io_types": { 00:15:54.333 "read": true, 00:15:54.333 "write": true, 00:15:54.333 "unmap": true, 00:15:54.333 "flush": true, 00:15:54.333 "reset": true, 00:15:54.333 "nvme_admin": true, 00:15:54.333 "nvme_io": true, 00:15:54.333 "nvme_io_md": false, 00:15:54.333 "write_zeroes": true, 00:15:54.333 "zcopy": false, 00:15:54.333 "get_zone_info": false, 00:15:54.333 "zone_management": false, 00:15:54.333 "zone_append": false, 00:15:54.333 "compare": true, 00:15:54.333 "compare_and_write": false, 00:15:54.333 "abort": true, 00:15:54.333 
"seek_hole": false, 00:15:54.333 "seek_data": false, 00:15:54.333 "copy": true, 00:15:54.333 "nvme_iov_md": false 00:15:54.333 }, 00:15:54.333 "driver_specific": { 00:15:54.333 "nvme": [ 00:15:54.333 { 00:15:54.333 "pci_address": "0000:00:11.0", 00:15:54.333 "trid": { 00:15:54.333 "trtype": "PCIe", 00:15:54.333 "traddr": "0000:00:11.0" 00:15:54.333 }, 00:15:54.333 "ctrlr_data": { 00:15:54.333 "cntlid": 0, 00:15:54.333 "vendor_id": "0x1b36", 00:15:54.333 "model_number": "QEMU NVMe Ctrl", 00:15:54.333 "serial_number": "12341", 00:15:54.333 "firmware_revision": "8.0.0", 00:15:54.333 "subnqn": "nqn.2019-08.org.qemu:12341", 00:15:54.333 "oacs": { 00:15:54.333 "security": 0, 00:15:54.333 "format": 1, 00:15:54.333 "firmware": 0, 00:15:54.333 "ns_manage": 1 00:15:54.333 }, 00:15:54.333 "multi_ctrlr": false, 00:15:54.333 "ana_reporting": false 00:15:54.333 }, 00:15:54.333 "vs": { 00:15:54.333 "nvme_version": "1.4" 00:15:54.333 }, 00:15:54.333 "ns_data": { 00:15:54.333 "id": 1, 00:15:54.333 "can_share": false 00:15:54.333 } 00:15:54.333 } 00:15:54.333 ], 00:15:54.333 "mp_policy": "active_passive" 00:15:54.333 } 00:15:54.333 } 00:15:54.333 ]' 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=1310720 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 5120 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:15:54.333 03:42:46 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:54.593 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:15:54.593 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:15:54.853 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=a32c9bb9-d754-4627-abf1-a5295b200573 00:15:54.853 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a32c9bb9-d754-4627-abf1-a5295b200573 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=65082f1d-6171-4f02-be6b-072c51e8b56e 
00:15:55.112 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:55.112 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:55.373 { 00:15:55.373 "name": "65082f1d-6171-4f02-be6b-072c51e8b56e", 00:15:55.373 "aliases": [ 00:15:55.373 "lvs/nvme0n1p0" 00:15:55.373 ], 00:15:55.373 "product_name": "Logical Volume", 00:15:55.373 "block_size": 4096, 00:15:55.373 "num_blocks": 26476544, 00:15:55.373 "uuid": "65082f1d-6171-4f02-be6b-072c51e8b56e", 00:15:55.373 "assigned_rate_limits": { 00:15:55.373 "rw_ios_per_sec": 0, 00:15:55.373 "rw_mbytes_per_sec": 0, 00:15:55.373 "r_mbytes_per_sec": 0, 00:15:55.373 "w_mbytes_per_sec": 0 00:15:55.373 }, 00:15:55.373 "claimed": false, 00:15:55.373 "zoned": false, 00:15:55.373 "supported_io_types": { 00:15:55.373 "read": true, 00:15:55.373 "write": true, 00:15:55.373 "unmap": true, 00:15:55.373 "flush": false, 00:15:55.373 "reset": true, 00:15:55.373 "nvme_admin": false, 00:15:55.373 "nvme_io": false, 00:15:55.373 "nvme_io_md": false, 00:15:55.373 "write_zeroes": true, 00:15:55.373 "zcopy": false, 00:15:55.373 "get_zone_info": false, 00:15:55.373 "zone_management": false, 00:15:55.373 "zone_append": false, 00:15:55.373 "compare": false, 00:15:55.373 "compare_and_write": false, 00:15:55.373 "abort": false, 00:15:55.373 "seek_hole": true, 00:15:55.373 "seek_data": true, 00:15:55.373 "copy": false, 00:15:55.373 "nvme_iov_md": false 00:15:55.373 }, 00:15:55.373 "driver_specific": { 00:15:55.373 "lvol": { 00:15:55.373 "lvol_store_uuid": "a32c9bb9-d754-4627-abf1-a5295b200573", 00:15:55.373 "base_bdev": "nvme0n1", 00:15:55.373 "thin_provision": true, 00:15:55.373 "num_allocated_clusters": 0, 00:15:55.373 "snapshot": false, 00:15:55.373 "clone": false, 00:15:55.373 "esnap_clone": false 00:15:55.373 } 00:15:55.373 } 00:15:55.373 } 00:15:55.373 ]' 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:15:55.373 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:15:55.633 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:15:55.633 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:15:55.633 03:42:47 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:55.633 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local bdev_name=65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:55.633 03:42:47 
ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:55.633 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:55.633 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:55.633 03:42:47 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:55.633 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:55.633 { 00:15:55.633 "name": "65082f1d-6171-4f02-be6b-072c51e8b56e", 00:15:55.633 "aliases": [ 00:15:55.633 "lvs/nvme0n1p0" 00:15:55.633 ], 00:15:55.633 "product_name": "Logical Volume", 00:15:55.633 "block_size": 4096, 00:15:55.633 "num_blocks": 26476544, 00:15:55.633 "uuid": "65082f1d-6171-4f02-be6b-072c51e8b56e", 00:15:55.633 "assigned_rate_limits": { 00:15:55.633 "rw_ios_per_sec": 0, 00:15:55.633 "rw_mbytes_per_sec": 0, 00:15:55.633 "r_mbytes_per_sec": 0, 00:15:55.633 "w_mbytes_per_sec": 0 00:15:55.633 }, 00:15:55.633 "claimed": false, 00:15:55.633 "zoned": false, 00:15:55.633 "supported_io_types": { 00:15:55.633 "read": true, 00:15:55.633 "write": true, 00:15:55.633 "unmap": true, 00:15:55.633 "flush": false, 00:15:55.633 "reset": true, 00:15:55.633 "nvme_admin": false, 00:15:55.633 "nvme_io": false, 00:15:55.633 "nvme_io_md": false, 00:15:55.633 "write_zeroes": true, 00:15:55.633 "zcopy": false, 00:15:55.633 "get_zone_info": false, 00:15:55.633 "zone_management": false, 00:15:55.633 "zone_append": false, 00:15:55.633 "compare": false, 00:15:55.633 "compare_and_write": false, 00:15:55.633 "abort": false, 00:15:55.633 "seek_hole": true, 00:15:55.633 "seek_data": true, 00:15:55.634 "copy": false, 00:15:55.634 "nvme_iov_md": false 00:15:55.634 }, 00:15:55.634 "driver_specific": { 00:15:55.634 "lvol": { 00:15:55.634 "lvol_store_uuid": "a32c9bb9-d754-4627-abf1-a5295b200573", 00:15:55.634 "base_bdev": "nvme0n1", 00:15:55.634 "thin_provision": true, 00:15:55.634 "num_allocated_clusters": 0, 00:15:55.634 "snapshot": false, 00:15:55.634 "clone": false, 00:15:55.634 "esnap_clone": false 00:15:55.634 } 00:15:55.634 } 00:15:55.634 } 00:15:55.634 ]' 00:15:55.634 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:55.892 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:55.892 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:55.892 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:55.892 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:55.892 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:55.892 03:42:48 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:15:55.892 03:42:48 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:15:56.151 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1378 -- # local 
bdev_name=65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1379 -- # local bdev_info 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bs 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local nb 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 65082f1d-6171-4f02-be6b-072c51e8b56e 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:15:56.151 { 00:15:56.151 "name": "65082f1d-6171-4f02-be6b-072c51e8b56e", 00:15:56.151 "aliases": [ 00:15:56.151 "lvs/nvme0n1p0" 00:15:56.151 ], 00:15:56.151 "product_name": "Logical Volume", 00:15:56.151 "block_size": 4096, 00:15:56.151 "num_blocks": 26476544, 00:15:56.151 "uuid": "65082f1d-6171-4f02-be6b-072c51e8b56e", 00:15:56.151 "assigned_rate_limits": { 00:15:56.151 "rw_ios_per_sec": 0, 00:15:56.151 "rw_mbytes_per_sec": 0, 00:15:56.151 "r_mbytes_per_sec": 0, 00:15:56.151 "w_mbytes_per_sec": 0 00:15:56.151 }, 00:15:56.151 "claimed": false, 00:15:56.151 "zoned": false, 00:15:56.151 "supported_io_types": { 00:15:56.151 "read": true, 00:15:56.151 "write": true, 00:15:56.151 "unmap": true, 00:15:56.151 "flush": false, 00:15:56.151 "reset": true, 00:15:56.151 "nvme_admin": false, 00:15:56.151 "nvme_io": false, 00:15:56.151 "nvme_io_md": false, 00:15:56.151 "write_zeroes": true, 00:15:56.151 "zcopy": false, 00:15:56.151 "get_zone_info": false, 00:15:56.151 "zone_management": false, 00:15:56.151 "zone_append": false, 00:15:56.151 "compare": false, 00:15:56.151 "compare_and_write": false, 00:15:56.151 "abort": false, 00:15:56.151 "seek_hole": true, 00:15:56.151 "seek_data": true, 00:15:56.151 "copy": false, 00:15:56.151 "nvme_iov_md": false 00:15:56.151 }, 00:15:56.151 "driver_specific": { 00:15:56.151 "lvol": { 00:15:56.151 "lvol_store_uuid": "a32c9bb9-d754-4627-abf1-a5295b200573", 00:15:56.151 "base_bdev": "nvme0n1", 00:15:56.151 "thin_provision": true, 00:15:56.151 "num_allocated_clusters": 0, 00:15:56.151 "snapshot": false, 00:15:56.151 "clone": false, 00:15:56.151 "esnap_clone": false 00:15:56.151 } 00:15:56.151 } 00:15:56.151 } 00:15:56.151 ]' 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # bs=4096 00:15:56.151 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:15:56.411 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # nb=26476544 00:15:56.411 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:15:56.411 03:42:48 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # echo 103424 00:15:56.411 03:42:48 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:15:56.411 03:42:48 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:15:56.411 03:42:48 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 65082f1d-6171-4f02-be6b-072c51e8b56e -c nvc0n1p0 --l2p_dram_limit 60 00:15:56.411 [2024-10-01 03:42:48.900910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.900972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:15:56.411 [2024-10-01 03:42:48.900989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:15:56.411 
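
Side note on the `fio.sh: line 52: [: -eq: unary operator expected` error captured a few lines up: the variable on the left of -eq expands to nothing, so after word splitting `[` sees only `-eq 1` and complains; the run survives because the test just evaluates as false here. A reproduction with an invented variable name, plus the two usual fixes:

    unset maybe_flag
    [ $maybe_flag -eq 1 ]          # -> "[: -eq: unary operator expected"
    [ "${maybe_flag:-0}" -eq 1 ]   # fix: default the empty expansion to 0
    [[ $maybe_flag -eq 1 ]]        # fix: [[ ]] arithmetic treats empty as 0
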
[2024-10-01 03:42:48.900997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.901083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.901092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:15:56.411 [2024-10-01 03:42:48.901102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:15:56.411 [2024-10-01 03:42:48.901112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.901153] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:15:56.411 [2024-10-01 03:42:48.901798] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:15:56.411 [2024-10-01 03:42:48.901824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.901831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:15:56.411 [2024-10-01 03:42:48.901840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:15:56.411 [2024-10-01 03:42:48.901847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.902064] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 65f72475-10d4-4cde-a697-8a55dab41d8a 00:15:56.411 [2024-10-01 03:42:48.903483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.903516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:15:56.411 [2024-10-01 03:42:48.903525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:15:56.411 [2024-10-01 03:42:48.903532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.910840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.910869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:15:56.411 [2024-10-01 03:42:48.910878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.233 ms 00:15:56.411 [2024-10-01 03:42:48.910886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.910984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.910995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:15:56.411 [2024-10-01 03:42:48.911013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:15:56.411 [2024-10-01 03:42:48.911025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.911083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.911099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:15:56.411 [2024-10-01 03:42:48.911106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:15:56.411 [2024-10-01 03:42:48.911114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.911148] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:15:56.411 [2024-10-01 03:42:48.914487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 
03:42:48.914514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:15:56.411 [2024-10-01 03:42:48.914524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.342 ms 00:15:56.411 [2024-10-01 03:42:48.914531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.914565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.914573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:15:56.411 [2024-10-01 03:42:48.914582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:15:56.411 [2024-10-01 03:42:48.914590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.914623] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:15:56.411 [2024-10-01 03:42:48.914750] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:15:56.411 [2024-10-01 03:42:48.914766] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:15:56.411 [2024-10-01 03:42:48.914775] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:15:56.411 [2024-10-01 03:42:48.914787] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:15:56.411 [2024-10-01 03:42:48.914796] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:15:56.411 [2024-10-01 03:42:48.914805] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:15:56.411 [2024-10-01 03:42:48.914812] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:15:56.411 [2024-10-01 03:42:48.914820] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:15:56.411 [2024-10-01 03:42:48.914826] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:15:56.411 [2024-10-01 03:42:48.914834] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.914840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:15:56.411 [2024-10-01 03:42:48.914851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:15:56.411 [2024-10-01 03:42:48.914857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.914931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.411 [2024-10-01 03:42:48.914939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:15:56.411 [2024-10-01 03:42:48.914948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:15:56.411 [2024-10-01 03:42:48.914955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.411 [2024-10-01 03:42:48.915075] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:15:56.411 [2024-10-01 03:42:48.915090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:15:56.411 [2024-10-01 03:42:48.915099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:56.411 [2024-10-01 03:42:48.915105] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:56.411 [2024-10-01 03:42:48.915114] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:15:56.411 [2024-10-01 03:42:48.915121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:15:56.411 [2024-10-01 03:42:48.915128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:15:56.411 [2024-10-01 03:42:48.915134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:15:56.411 [2024-10-01 03:42:48.915142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:15:56.411 [2024-10-01 03:42:48.915147] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:56.411 [2024-10-01 03:42:48.915155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:15:56.412 [2024-10-01 03:42:48.915161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:15:56.412 [2024-10-01 03:42:48.915167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:15:56.412 [2024-10-01 03:42:48.915173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:15:56.412 [2024-10-01 03:42:48.915179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:15:56.412 [2024-10-01 03:42:48.915185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915194] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:15:56.412 [2024-10-01 03:42:48.915199] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:15:56.412 [2024-10-01 03:42:48.915207] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915214] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:15:56.412 [2024-10-01 03:42:48.915221] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:56.412 [2024-10-01 03:42:48.915238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:15:56.412 [2024-10-01 03:42:48.915243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915251] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:56.412 [2024-10-01 03:42:48.915257] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:15:56.412 [2024-10-01 03:42:48.915263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:56.412 [2024-10-01 03:42:48.915276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:15:56.412 [2024-10-01 03:42:48.915281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:15:56.412 [2024-10-01 03:42:48.915295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:15:56.412 [2024-10-01 03:42:48.915303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:56.412 [2024-10-01 03:42:48.915315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:15:56.412 [2024-10-01 03:42:48.915319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:15:56.412 [2024-10-01 03:42:48.915326] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:15:56.412 [2024-10-01 03:42:48.915332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:15:56.412 [2024-10-01 03:42:48.915339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:15:56.412 [2024-10-01 03:42:48.915357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:15:56.412 [2024-10-01 03:42:48.915370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:15:56.412 [2024-10-01 03:42:48.915376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915381] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:15:56.412 [2024-10-01 03:42:48.915393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:15:56.412 [2024-10-01 03:42:48.915400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:15:56.412 [2024-10-01 03:42:48.915409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:15:56.412 [2024-10-01 03:42:48.915415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:15:56.412 [2024-10-01 03:42:48.915424] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:15:56.412 [2024-10-01 03:42:48.915429] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:15:56.412 [2024-10-01 03:42:48.915436] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:15:56.412 [2024-10-01 03:42:48.915442] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:15:56.412 [2024-10-01 03:42:48.915449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:15:56.412 [2024-10-01 03:42:48.915459] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:15:56.412 [2024-10-01 03:42:48.915470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:56.412 [2024-10-01 03:42:48.915479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:15:56.412 [2024-10-01 03:42:48.915488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:15:56.412 [2024-10-01 03:42:48.915493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:15:56.412 [2024-10-01 03:42:48.915501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:15:56.412 [2024-10-01 03:42:48.915507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:15:56.412 [2024-10-01 03:42:48.915516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:15:56.412 [2024-10-01 03:42:48.915522] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:15:56.412 [2024-10-01 03:42:48.915529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:15:56.412 [2024-10-01 03:42:48.915535] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:15:56.412 [2024-10-01 03:42:48.915545] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:15:56.412 [2024-10-01 03:42:48.915551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:15:56.412 [2024-10-01 03:42:48.915558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:15:56.412 [2024-10-01 03:42:48.915563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:15:56.412 [2024-10-01 03:42:48.915570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:15:56.412 [2024-10-01 03:42:48.915577] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:15:56.412 [2024-10-01 03:42:48.915585] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:15:56.412 [2024-10-01 03:42:48.915591] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:15:56.412 [2024-10-01 03:42:48.915599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:15:56.412 [2024-10-01 03:42:48.915604] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:15:56.412 [2024-10-01 03:42:48.915612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:15:56.412 [2024-10-01 03:42:48.915618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:56.412 [2024-10-01 03:42:48.915626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:15:56.412 [2024-10-01 03:42:48.915633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:15:56.412 [2024-10-01 03:42:48.915641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:56.412 [2024-10-01 03:42:48.915696] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
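
The layout dump above is internally consistent and worth cross-checking against the create parameters: 20971520 L2P entries at the stated 4-byte address size is exactly the 80.00 MiB l2p region, and those same 20971520 user blocks at 4 KiB each give the 80 GiB namespace that ftl0 will expose. Illustrative arithmetic only:

    echo $(( 20971520 * 4 / 1024 / 1024 ))    # 80  -> "Region l2p ... blocks: 80.00 MiB"
    echo $(( 20971520 * 4096 / 1024 ** 3 ))   # 80  -> GiB of user LBA space (ftl0 num_blocks)

Only --l2p_dram_limit 60 of those 80 MiB may stay resident in DRAM, hence the "l2p maximum resident size is: 59 (of 60) MiB" notice further down.
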
00:15:56.412 [2024-10-01 03:42:48.915709] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:15:59.695 [2024-10-01 03:42:51.563845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.563930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:15:59.695 [2024-10-01 03:42:51.563946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2648.128 ms 00:15:59.695 [2024-10-01 03:42:51.563956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.695 [2024-10-01 03:42:51.600084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.600157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:15:59.695 [2024-10-01 03:42:51.600178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.894 ms 00:15:59.695 [2024-10-01 03:42:51.600196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.695 [2024-10-01 03:42:51.600407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.600439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:15:59.695 [2024-10-01 03:42:51.600452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:15:59.695 [2024-10-01 03:42:51.600468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.695 [2024-10-01 03:42:51.635787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.635841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:15:59.695 [2024-10-01 03:42:51.635854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.257 ms 00:15:59.695 [2024-10-01 03:42:51.635864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.695 [2024-10-01 03:42:51.635907] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.635920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:15:59.695 [2024-10-01 03:42:51.635931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:15:59.695 [2024-10-01 03:42:51.635941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.695 [2024-10-01 03:42:51.636417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.636444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:15:59.695 [2024-10-01 03:42:51.636454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:15:59.695 [2024-10-01 03:42:51.636464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.695 [2024-10-01 03:42:51.636605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.636622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:15:59.695 [2024-10-01 03:42:51.636630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.117 ms 00:15:59.695 [2024-10-01 03:42:51.636642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.695 [2024-10-01 03:42:51.652391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.652431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:15:59.695 [2024-10-01 
03:42:51.652443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.724 ms 00:15:59.695 [2024-10-01 03:42:51.652454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.695 [2024-10-01 03:42:51.664662] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:15:59.695 [2024-10-01 03:42:51.681882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.681923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:15:59.695 [2024-10-01 03:42:51.681938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.321 ms 00:15:59.695 [2024-10-01 03:42:51.681947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.695 [2024-10-01 03:42:51.736055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.695 [2024-10-01 03:42:51.736106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:15:59.696 [2024-10-01 03:42:51.736122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.061 ms 00:15:59.696 [2024-10-01 03:42:51.736131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.736327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.736338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:15:59.696 [2024-10-01 03:42:51.736352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:15:59.696 [2024-10-01 03:42:51.736362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.759791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.759829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:15:59.696 [2024-10-01 03:42:51.759844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.361 ms 00:15:59.696 [2024-10-01 03:42:51.759852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.782457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.782491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:15:59.696 [2024-10-01 03:42:51.782504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.558 ms 00:15:59.696 [2024-10-01 03:42:51.782512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.783108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.783131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:15:59.696 [2024-10-01 03:42:51.783142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.553 ms 00:15:59.696 [2024-10-01 03:42:51.783151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.851972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.852024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:15:59.696 [2024-10-01 03:42:51.852043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.781 ms 00:15:59.696 [2024-10-01 03:42:51.852051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 
03:42:51.876916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.876957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:15:59.696 [2024-10-01 03:42:51.876973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.770 ms 00:15:59.696 [2024-10-01 03:42:51.876983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.900216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.900257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:15:59.696 [2024-10-01 03:42:51.900270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.181 ms 00:15:59.696 [2024-10-01 03:42:51.900278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.923542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.923583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:15:59.696 [2024-10-01 03:42:51.923596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.216 ms 00:15:59.696 [2024-10-01 03:42:51.923604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.923656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.923666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:15:59.696 [2024-10-01 03:42:51.923680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:15:59.696 [2024-10-01 03:42:51.923687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.923775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:15:59.696 [2024-10-01 03:42:51.923787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:15:59.696 [2024-10-01 03:42:51.923796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:15:59.696 [2024-10-01 03:42:51.923807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:15:59.696 [2024-10-01 03:42:51.924836] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3023.455 ms, result 0 00:15:59.696 { 00:15:59.696 "name": "ftl0", 00:15:59.696 "uuid": "65f72475-10d4-4cde-a697-8a55dab41d8a" 00:15:59.696 } 00:15:59.696 03:42:51 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:15:59.696 03:42:51 ftl.ftl_fio_basic -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:15:59.696 03:42:51 ftl.ftl_fio_basic -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:59.696 03:42:51 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local i 00:15:59.696 03:42:51 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:59.696 03:42:51 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:59.696 03:42:51 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:59.696 03:42:52 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:15:59.954 [ 00:15:59.954 { 00:15:59.954 "name": "ftl0", 00:15:59.954 "aliases": [ 00:15:59.954 "65f72475-10d4-4cde-a697-8a55dab41d8a" 00:15:59.954 ], 00:15:59.954 "product_name": "FTL 
disk", 00:15:59.954 "block_size": 4096, 00:15:59.954 "num_blocks": 20971520, 00:15:59.954 "uuid": "65f72475-10d4-4cde-a697-8a55dab41d8a", 00:15:59.954 "assigned_rate_limits": { 00:15:59.954 "rw_ios_per_sec": 0, 00:15:59.954 "rw_mbytes_per_sec": 0, 00:15:59.954 "r_mbytes_per_sec": 0, 00:15:59.954 "w_mbytes_per_sec": 0 00:15:59.954 }, 00:15:59.954 "claimed": false, 00:15:59.954 "zoned": false, 00:15:59.954 "supported_io_types": { 00:15:59.955 "read": true, 00:15:59.955 "write": true, 00:15:59.955 "unmap": true, 00:15:59.955 "flush": true, 00:15:59.955 "reset": false, 00:15:59.955 "nvme_admin": false, 00:15:59.955 "nvme_io": false, 00:15:59.955 "nvme_io_md": false, 00:15:59.955 "write_zeroes": true, 00:15:59.955 "zcopy": false, 00:15:59.955 "get_zone_info": false, 00:15:59.955 "zone_management": false, 00:15:59.955 "zone_append": false, 00:15:59.955 "compare": false, 00:15:59.955 "compare_and_write": false, 00:15:59.955 "abort": false, 00:15:59.955 "seek_hole": false, 00:15:59.955 "seek_data": false, 00:15:59.955 "copy": false, 00:15:59.955 "nvme_iov_md": false 00:15:59.955 }, 00:15:59.955 "driver_specific": { 00:15:59.955 "ftl": { 00:15:59.955 "base_bdev": "65082f1d-6171-4f02-be6b-072c51e8b56e", 00:15:59.955 "cache": "nvc0n1p0" 00:15:59.955 } 00:15:59.955 } 00:15:59.955 } 00:15:59.955 ] 00:15:59.955 03:42:52 ftl.ftl_fio_basic -- common/autotest_common.sh@907 -- # return 0 00:15:59.955 03:42:52 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:15:59.955 03:42:52 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:16:00.213 03:42:52 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:16:00.213 03:42:52 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:16:00.213 [2024-10-01 03:42:52.749581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.213 [2024-10-01 03:42:52.749643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:16:00.213 [2024-10-01 03:42:52.749656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:16:00.213 [2024-10-01 03:42:52.749664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.213 [2024-10-01 03:42:52.749692] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:16:00.213 [2024-10-01 03:42:52.751949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.213 [2024-10-01 03:42:52.751978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:16:00.213 [2024-10-01 03:42:52.751989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:16:00.213 [2024-10-01 03:42:52.751996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.213 [2024-10-01 03:42:52.752347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.213 [2024-10-01 03:42:52.752361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:16:00.213 [2024-10-01 03:42:52.752371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.310 ms 00:16:00.213 [2024-10-01 03:42:52.752379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.213 [2024-10-01 03:42:52.754843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.213 [2024-10-01 03:42:52.754862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:16:00.213 
[2024-10-01 03:42:52.754872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:16:00.213 [2024-10-01 03:42:52.754878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.213 [2024-10-01 03:42:52.759544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.213 [2024-10-01 03:42:52.759570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:16:00.213 [2024-10-01 03:42:52.759579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.646 ms 00:16:00.213 [2024-10-01 03:42:52.759586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.472 [2024-10-01 03:42:52.779007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.473 [2024-10-01 03:42:52.779042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:16:00.473 [2024-10-01 03:42:52.779053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.344 ms 00:16:00.473 [2024-10-01 03:42:52.779060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.473 [2024-10-01 03:42:52.791418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.473 [2024-10-01 03:42:52.791450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:16:00.473 [2024-10-01 03:42:52.791462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.306 ms 00:16:00.473 [2024-10-01 03:42:52.791469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.473 [2024-10-01 03:42:52.791604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.473 [2024-10-01 03:42:52.791613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:16:00.473 [2024-10-01 03:42:52.791622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:16:00.473 [2024-10-01 03:42:52.791631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.473 [2024-10-01 03:42:52.809587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.473 [2024-10-01 03:42:52.809632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:16:00.473 [2024-10-01 03:42:52.809642] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.931 ms 00:16:00.473 [2024-10-01 03:42:52.809649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.473 [2024-10-01 03:42:52.827162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.473 [2024-10-01 03:42:52.827196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:16:00.473 [2024-10-01 03:42:52.827207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.469 ms 00:16:00.473 [2024-10-01 03:42:52.827214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.473 [2024-10-01 03:42:52.844985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.473 [2024-10-01 03:42:52.845034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:16:00.473 [2024-10-01 03:42:52.845046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.724 ms 00:16:00.473 [2024-10-01 03:42:52.845052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.473 [2024-10-01 03:42:52.862221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.473 [2024-10-01 03:42:52.862255] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:16:00.473 [2024-10-01 03:42:52.862267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.074 ms 00:16:00.473 [2024-10-01 03:42:52.862272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.473 [2024-10-01 03:42:52.862312] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:16:00.473 [2024-10-01 03:42:52.862335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 
[2024-10-01 03:42:52.862498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:16:00.473 [2024-10-01 03:42:52.862652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:16:00.474 [2024-10-01 03:42:52.862680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.862999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:16:00.474 [2024-10-01 03:42:52.863100] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:16:00.474 [2024-10-01 03:42:52.863109] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 65f72475-10d4-4cde-a697-8a55dab41d8a 00:16:00.474 [2024-10-01 03:42:52.863116] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:16:00.474 [2024-10-01 03:42:52.863125] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:16:00.474 [2024-10-01 03:42:52.863132] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:16:00.474 [2024-10-01 03:42:52.863140] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:16:00.474 [2024-10-01 03:42:52.863145] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:16:00.474 [2024-10-01 03:42:52.863153] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:16:00.474 [2024-10-01 03:42:52.863160] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:16:00.474 [2024-10-01 03:42:52.863167] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:16:00.474 [2024-10-01 03:42:52.863172] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:16:00.474 [2024-10-01 03:42:52.863179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.474 [2024-10-01 03:42:52.863186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:16:00.474 [2024-10-01 03:42:52.863194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.868 ms 00:16:00.474 [2024-10-01 03:42:52.863201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.475 [2024-10-01 03:42:52.873142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.475 [2024-10-01 03:42:52.873172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:16:00.475 [2024-10-01 03:42:52.873184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.903 ms 00:16:00.475 [2024-10-01 03:42:52.873191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.475 [2024-10-01 03:42:52.873491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:00.475 [2024-10-01 03:42:52.873511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:16:00.475 [2024-10-01 03:42:52.873520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:16:00.475 [2024-10-01 03:42:52.873526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.475 [2024-10-01 03:42:52.909242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.475 [2024-10-01 03:42:52.909282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:00.475 [2024-10-01 03:42:52.909294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.475 [2024-10-01 03:42:52.909302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
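On the "WAF: inf" line in the statistics dump above: write amplification is conventionally the ratio of total media writes to user writes. At this point the device has seen 960 internal (metadata) writes and 0 user writes, so the quotient has no finite value and is printed as inf. A one-liner check of that arithmetic, with awk chosen only for illustration:

    # WAF = total media writes / user writes; with the values from the dump
    # above (960 and 0) there is no finite quotient, hence "inf".
    awk 'BEGIN { tw = 960; uw = 0; if (uw > 0) print tw / uw; else print "inf" }'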
00:16:00.475 [2024-10-01 03:42:52.909368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.475 [2024-10-01 03:42:52.909378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:00.475 [2024-10-01 03:42:52.909386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.475 [2024-10-01 03:42:52.909392] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.475 [2024-10-01 03:42:52.909476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.475 [2024-10-01 03:42:52.909489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:00.475 [2024-10-01 03:42:52.909497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.475 [2024-10-01 03:42:52.909504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.475 [2024-10-01 03:42:52.909530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.475 [2024-10-01 03:42:52.909537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:00.475 [2024-10-01 03:42:52.909547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.475 [2024-10-01 03:42:52.909554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.475 [2024-10-01 03:42:52.975204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.475 [2024-10-01 03:42:52.975261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:00.475 [2024-10-01 03:42:52.975274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.475 [2024-10-01 03:42:52.975282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.733 [2024-10-01 03:42:53.025639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.733 [2024-10-01 03:42:53.025694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:00.734 [2024-10-01 03:42:53.025709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.734 [2024-10-01 03:42:53.025716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.734 [2024-10-01 03:42:53.025821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.734 [2024-10-01 03:42:53.025831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:00.734 [2024-10-01 03:42:53.025839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.734 [2024-10-01 03:42:53.025845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.734 [2024-10-01 03:42:53.025900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.734 [2024-10-01 03:42:53.025908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:00.734 [2024-10-01 03:42:53.025917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.734 [2024-10-01 03:42:53.025923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.734 [2024-10-01 03:42:53.026028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.734 [2024-10-01 03:42:53.026038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:00.734 [2024-10-01 03:42:53.026046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.734 [2024-10-01 
03:42:53.026052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.734 [2024-10-01 03:42:53.026096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.734 [2024-10-01 03:42:53.026104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:16:00.734 [2024-10-01 03:42:53.026112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.734 [2024-10-01 03:42:53.026121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.734 [2024-10-01 03:42:53.026169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.734 [2024-10-01 03:42:53.026177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:00.734 [2024-10-01 03:42:53.026185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.734 [2024-10-01 03:42:53.026191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.734 [2024-10-01 03:42:53.026241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:16:00.734 [2024-10-01 03:42:53.026249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:00.734 [2024-10-01 03:42:53.026257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:16:00.734 [2024-10-01 03:42:53.026263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:00.734 [2024-10-01 03:42:53.026420] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 276.824 ms, result 0 00:16:00.734 true 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 72948 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@950 -- # '[' -z 72948 ']' 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # kill -0 72948 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # uname 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72948 00:16:00.734 killing process with pid 72948 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72948' 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@969 -- # kill 72948 00:16:00.734 03:42:53 ftl.ftl_fio_basic -- common/autotest_common.sh@974 -- # wait 72948 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:02.634 03:42:54 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:16:02.634 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:16:02.634 fio-3.35 00:16:02.634 Starting 1 thread 00:16:06.828 00:16:06.828 test: (groupid=0, jobs=1): err= 0: pid=73127: Tue Oct 1 03:42:58 2024 00:16:06.828 read: IOPS=1348, BW=89.6MiB/s (93.9MB/s)(255MiB/2842msec) 00:16:06.828 slat (nsec): min=4177, max=31367, avg=5676.98, stdev=2057.08 00:16:06.828 clat (usec): min=252, max=829, avg=334.05, stdev=41.83 00:16:06.828 lat (usec): min=257, max=834, avg=339.73, stdev=42.78 00:16:06.828 clat percentiles (usec): 00:16:06.829 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 310], 20.00th=[ 318], 00:16:06.829 | 30.00th=[ 322], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 326], 00:16:06.829 | 70.00th=[ 330], 80.00th=[ 334], 90.00th=[ 351], 95.00th=[ 429], 00:16:06.829 | 99.00th=[ 510], 99.50th=[ 570], 99.90th=[ 717], 99.95th=[ 742], 00:16:06.829 | 99.99th=[ 832] 00:16:06.829 write: IOPS=1358, BW=90.2MiB/s (94.6MB/s)(256MiB/2839msec); 0 zone resets 00:16:06.829 slat (nsec): min=15284, max=80445, avg=20246.86, stdev=3982.25 00:16:06.829 clat (usec): min=270, max=946, avg=365.08, stdev=58.72 00:16:06.829 lat (usec): min=290, max=967, avg=385.33, stdev=59.00 00:16:06.829 clat percentiles (usec): 00:16:06.829 | 1.00th=[ 310], 5.00th=[ 318], 10.00th=[ 334], 20.00th=[ 343], 00:16:06.829 | 30.00th=[ 347], 40.00th=[ 351], 50.00th=[ 351], 60.00th=[ 355], 00:16:06.829 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 412], 95.00th=[ 437], 00:16:06.829 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 807], 99.95th=[ 873], 00:16:06.829 | 99.99th=[ 947] 00:16:06.829 bw ( KiB/s): min=91800, max=93840, per=100.00%, avg=92561.60, stdev=857.99, samples=5 00:16:06.829 iops : min= 1350, max= 1380, avg=1361.20, stdev=12.62, samples=5 00:16:06.829 lat (usec) : 500=97.84%, 750=2.07%, 1000=0.09% 00:16:06.829 cpu : 
usr=99.23%, sys=0.07%, ctx=3, majf=0, minf=1169 00:16:06.829 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:06.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:06.829 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:06.829 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:06.829 00:16:06.829 Run status group 0 (all jobs): 00:16:06.829 READ: bw=89.6MiB/s (93.9MB/s), 89.6MiB/s-89.6MiB/s (93.9MB/s-93.9MB/s), io=255MiB (267MB), run=2842-2842msec 00:16:06.829 WRITE: bw=90.2MiB/s (94.6MB/s), 90.2MiB/s-90.2MiB/s (94.6MB/s-94.6MB/s), io=256MiB (269MB), run=2839-2839msec 00:16:08.203 ----------------------------------------------------- 00:16:08.203 Suppressions used: 00:16:08.203 count bytes template 00:16:08.203 1 5 /usr/src/fio/parse.c 00:16:08.203 1 8 libtcmalloc_minimal.so 00:16:08.203 1 904 libcrypto.so 00:16:08.203 ----------------------------------------------------- 00:16:08.203 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local sanitizers 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:08.203 03:43:00 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:16:08.203 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:08.203 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:08.203 fio-3.35 00:16:08.203 Starting 2 threads 00:16:34.736 00:16:34.736 first_half: (groupid=0, jobs=1): err= 0: pid=73219: Tue Oct 1 03:43:24 2024 00:16:34.736 read: IOPS=2820, BW=11.0MiB/s (11.6MB/s)(255MiB/23130msec) 00:16:34.736 slat (nsec): min=3005, max=52995, avg=5151.82, stdev=945.39 00:16:34.736 clat (usec): min=608, max=290061, avg=34128.03, stdev=17613.97 00:16:34.736 lat (usec): min=613, max=290069, avg=34133.18, stdev=17613.99 00:16:34.736 clat percentiles (msec): 00:16:34.736 | 1.00th=[ 8], 5.00th=[ 25], 10.00th=[ 31], 20.00th=[ 32], 00:16:34.736 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:16:34.736 | 70.00th=[ 33], 80.00th=[ 34], 90.00th=[ 38], 95.00th=[ 42], 00:16:34.736 | 99.00th=[ 138], 99.50th=[ 155], 99.90th=[ 205], 99.95th=[ 234], 00:16:34.736 | 99.99th=[ 288] 00:16:34.737 write: IOPS=3338, BW=13.0MiB/s (13.7MB/s)(256MiB/19629msec); 0 zone resets 00:16:34.737 slat (usec): min=3, max=305, avg= 6.59, stdev= 3.11 00:16:34.737 clat (usec): min=363, max=83560, avg=11148.00, stdev=18075.33 00:16:34.737 lat (usec): min=373, max=83565, avg=11154.59, stdev=18075.42 00:16:34.737 clat percentiles (usec): 00:16:34.737 | 1.00th=[ 668], 5.00th=[ 758], 10.00th=[ 848], 20.00th=[ 1172], 00:16:34.737 | 30.00th=[ 2835], 40.00th=[ 3818], 50.00th=[ 5080], 60.00th=[ 5735], 00:16:34.737 | 70.00th=[ 6783], 80.00th=[11731], 90.00th=[32900], 95.00th=[63701], 00:16:34.737 | 99.00th=[71828], 99.50th=[73925], 99.90th=[80217], 99.95th=[81265], 00:16:34.737 | 99.99th=[83362] 00:16:34.737 bw ( KiB/s): min= 920, max=43160, per=85.35%, avg=22798.39, stdev=13195.48, samples=23 00:16:34.737 iops : min= 230, max=10790, avg=5699.57, stdev=3298.84, samples=23 00:16:34.737 lat (usec) : 500=0.03%, 750=2.24%, 1000=5.49% 00:16:34.737 lat (msec) : 2=5.02%, 4=8.28%, 10=18.71%, 20=6.71%, 50=47.07% 00:16:34.737 lat (msec) : 100=5.53%, 250=0.92%, 500=0.01% 00:16:34.737 cpu : usr=99.25%, sys=0.12%, ctx=56, majf=0, minf=5565 00:16:34.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:16:34.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.737 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.737 issued rwts: total=65242,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.737 second_half: (groupid=0, jobs=1): err= 0: pid=73220: Tue Oct 1 03:43:24 2024 00:16:34.737 read: IOPS=2833, BW=11.1MiB/s (11.6MB/s)(254MiB/22990msec) 00:16:34.737 slat (usec): min=3, max=452, avg= 5.34, stdev= 2.86 00:16:34.737 clat (usec): min=634, max=286700, avg=34887.64, stdev=16358.79 00:16:34.737 lat (usec): min=639, max=286707, avg=34892.97, stdev=16358.87 00:16:34.737 clat percentiles (msec): 00:16:34.737 | 1.00th=[ 5], 5.00th=[ 29], 10.00th=[ 31], 20.00th=[ 32], 00:16:34.737 | 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 32], 60.00th=[ 32], 00:16:34.737 | 70.00th=[ 33], 80.00th=[ 35], 90.00th=[ 39], 95.00th=[ 44], 00:16:34.737 | 99.00th=[ 124], 99.50th=[ 144], 99.90th=[ 171], 
99.95th=[ 213], 00:16:34.737 | 99.99th=[ 271] 00:16:34.737 write: IOPS=4416, BW=17.2MiB/s (18.1MB/s)(256MiB/14840msec); 0 zone resets 00:16:34.737 slat (usec): min=3, max=989, avg= 6.78, stdev= 5.71 00:16:34.737 clat (usec): min=372, max=83908, avg=10205.81, stdev=17726.63 00:16:34.737 lat (usec): min=378, max=83914, avg=10212.59, stdev=17726.63 00:16:34.737 clat percentiles (usec): 00:16:34.737 | 1.00th=[ 676], 5.00th=[ 766], 10.00th=[ 840], 20.00th=[ 1045], 00:16:34.737 | 30.00th=[ 1401], 40.00th=[ 3130], 50.00th=[ 4424], 60.00th=[ 5342], 00:16:34.737 | 70.00th=[ 6259], 80.00th=[11207], 90.00th=[15664], 95.00th=[63177], 00:16:34.737 | 99.00th=[71828], 99.50th=[73925], 99.90th=[81265], 99.95th=[81265], 00:16:34.737 | 99.99th=[82314] 00:16:34.737 bw ( KiB/s): min= 488, max=42944, per=93.47%, avg=24965.71, stdev=13256.40, samples=21 00:16:34.737 iops : min= 122, max=10736, avg=6241.52, stdev=3313.98, samples=21 00:16:34.737 lat (usec) : 500=0.01%, 750=2.05%, 1000=7.08% 00:16:34.737 lat (msec) : 2=7.76%, 4=6.73%, 10=15.62%, 20=7.19%, 50=46.77% 00:16:34.737 lat (msec) : 100=5.85%, 250=0.93%, 500=0.01% 00:16:34.737 cpu : usr=98.79%, sys=0.31%, ctx=125, majf=0, minf=5556 00:16:34.737 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:16:34.737 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:34.737 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:34.737 issued rwts: total=65143,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:34.737 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:34.737 00:16:34.737 Run status group 0 (all jobs): 00:16:34.737 READ: bw=22.0MiB/s (23.1MB/s), 11.0MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=509MiB (534MB), run=22990-23130msec 00:16:34.737 WRITE: bw=26.1MiB/s (27.3MB/s), 13.0MiB/s-17.2MiB/s (13.7MB/s-18.1MB/s), io=512MiB (537MB), run=14840-19629msec 00:16:34.737 ----------------------------------------------------- 00:16:34.737 Suppressions used: 00:16:34.737 count bytes template 00:16:34.737 2 10 /usr/src/fio/parse.c 00:16:34.737 2 192 /usr/src/fio/iolog.c 00:16:34.737 1 8 libtcmalloc_minimal.so 00:16:34.737 1 904 libcrypto.so 00:16:34.737 ----------------------------------------------------- 00:16:34.737 00:16:34.737 03:43:26 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:16:34.737 03:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:34.737 03:43:26 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # 
local sanitizers 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # shift 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local asan_lib= 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # grep libasan 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # break 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:16:34.737 03:43:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:16:34.737 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:16:34.737 fio-3.35 00:16:34.737 Starting 1 thread 00:16:49.616 00:16:49.616 test: (groupid=0, jobs=1): err= 0: pid=73530: Tue Oct 1 03:43:39 2024 00:16:49.616 read: IOPS=8376, BW=32.7MiB/s (34.3MB/s)(255MiB/7784msec) 00:16:49.616 slat (nsec): min=2898, max=18655, avg=3438.44, stdev=647.87 00:16:49.616 clat (usec): min=471, max=32367, avg=15273.61, stdev=1751.91 00:16:49.616 lat (usec): min=474, max=32370, avg=15277.05, stdev=1751.93 00:16:49.616 clat percentiles (usec): 00:16:49.616 | 1.00th=[13042], 5.00th=[13304], 10.00th=[13435], 20.00th=[13698], 00:16:49.616 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15401], 60.00th=[15533], 00:16:49.616 | 70.00th=[15664], 80.00th=[15795], 90.00th=[16057], 95.00th=[16450], 00:16:49.616 | 99.00th=[23725], 99.50th=[25035], 99.90th=[28967], 99.95th=[30278], 00:16:49.616 | 99.99th=[31851] 00:16:49.616 write: IOPS=16.7k, BW=65.4MiB/s (68.5MB/s)(256MiB/3917msec); 0 zone resets 00:16:49.616 slat (usec): min=3, max=265, avg= 5.72, stdev= 2.40 00:16:49.616 clat (usec): min=435, max=42316, avg=7608.84, stdev=9421.50 00:16:49.616 lat (usec): min=440, max=42322, avg=7614.56, stdev=9421.50 00:16:49.616 clat percentiles (usec): 00:16:49.616 | 1.00th=[ 603], 5.00th=[ 685], 10.00th=[ 742], 20.00th=[ 857], 00:16:49.616 | 30.00th=[ 1004], 40.00th=[ 1385], 50.00th=[ 5342], 60.00th=[ 5932], 00:16:49.616 | 70.00th=[ 7046], 80.00th=[ 8225], 90.00th=[26870], 95.00th=[28967], 00:16:49.616 | 99.00th=[33817], 99.50th=[37487], 99.90th=[41157], 99.95th=[41681], 00:16:49.616 | 99.99th=[42206] 00:16:49.616 bw ( KiB/s): min=55128, max=87512, per=98.00%, avg=65587.50, stdev=11822.88, samples=8 00:16:49.616 iops : min=13782, max=21878, avg=16396.75, stdev=2955.75, samples=8 00:16:49.616 lat (usec) : 500=0.03%, 750=5.56%, 1000=9.31% 00:16:49.616 lat (msec) : 2=5.72%, 4=0.58%, 10=20.77%, 20=48.82%, 50=9.22% 00:16:49.617 cpu : usr=99.21%, sys=0.14%, ctx=45, majf=0, minf=5565 00:16:49.617 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 
00:16:49.617 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:49.617 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:49.617 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:49.617 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:49.617 00:16:49.617 Run status group 0 (all jobs): 00:16:49.617 READ: bw=32.7MiB/s (34.3MB/s), 32.7MiB/s-32.7MiB/s (34.3MB/s-34.3MB/s), io=255MiB (267MB), run=7784-7784msec 00:16:49.617 WRITE: bw=65.4MiB/s (68.5MB/s), 65.4MiB/s-65.4MiB/s (68.5MB/s-68.5MB/s), io=256MiB (268MB), run=3917-3917msec 00:16:49.617 ----------------------------------------------------- 00:16:49.617 Suppressions used: 00:16:49.617 count bytes template 00:16:49.617 1 5 /usr/src/fio/parse.c 00:16:49.617 2 192 /usr/src/fio/iolog.c 00:16:49.617 1 8 libtcmalloc_minimal.so 00:16:49.617 1 904 libcrypto.so 00:16:49.617 ----------------------------------------------------- 00:16:49.617 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:16:49.617 Remove shared memory files 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57521 /dev/shm/spdk_tgt_trace.pid71859 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:16:49.617 00:16:49.617 real 0m56.381s 00:16:49.617 user 2m1.353s 00:16:49.617 sys 0m2.700s 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:49.617 03:43:41 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:16:49.617 ************************************ 00:16:49.617 END TEST ftl_fio_basic 00:16:49.617 ************************************ 00:16:49.617 03:43:41 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:49.617 03:43:41 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:49.617 03:43:41 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:49.617 03:43:41 ftl -- common/autotest_common.sh@10 -- # set +x 00:16:49.617 ************************************ 00:16:49.617 START TEST ftl_bdevperf 00:16:49.617 ************************************ 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:16:49.617 * Looking for test storage... 
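Cross-checking the summary lines from the depth128 run above: fio's bandwidth figures are just bytes moved over elapsed time. With 65202 reads and 65536 writes of 4 KiB each over 7784 ms and 3917 ms respectively, the reported 32.7 MiB/s and 65.4 MiB/s follow directly:

    # read:  65202 blocks * 4 KiB = 254.7 MiB / 7.784 s -> ~32.7 MiB/s
    # write: 65536 blocks * 4 KiB = 256 MiB   / 3.917 s -> ~65.4 MiB/s
    awk 'BEGIN { printf "read %.1f MiB/s, write %.1f MiB/s\n",
                 65202 * 4 / 1024 / 7.784, 65536 * 4 / 1024 / 3.917 }'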
00:16:49.617 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lcov --version 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:16:49.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.617 --rc genhtml_branch_coverage=1 00:16:49.617 --rc genhtml_function_coverage=1 00:16:49.617 --rc genhtml_legend=1 00:16:49.617 --rc geninfo_all_blocks=1 00:16:49.617 --rc geninfo_unexecuted_blocks=1 00:16:49.617 00:16:49.617 ' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:16:49.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.617 --rc genhtml_branch_coverage=1 00:16:49.617 
--rc genhtml_function_coverage=1 00:16:49.617 --rc genhtml_legend=1 00:16:49.617 --rc geninfo_all_blocks=1 00:16:49.617 --rc geninfo_unexecuted_blocks=1 00:16:49.617 00:16:49.617 ' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:16:49.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.617 --rc genhtml_branch_coverage=1 00:16:49.617 --rc genhtml_function_coverage=1 00:16:49.617 --rc genhtml_legend=1 00:16:49.617 --rc geninfo_all_blocks=1 00:16:49.617 --rc geninfo_unexecuted_blocks=1 00:16:49.617 00:16:49.617 ' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:16:49.617 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:49.617 --rc genhtml_branch_coverage=1 00:16:49.617 --rc genhtml_function_coverage=1 00:16:49.617 --rc genhtml_legend=1 00:16:49.617 --rc geninfo_all_blocks=1 00:16:49.617 --rc geninfo_unexecuted_blocks=1 00:16:49.617 00:16:49.617 ' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:49.617 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=73758 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 73758 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 73758 ']' 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:49.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:49.618 03:43:41 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:16:49.618 [2024-10-01 03:43:41.786311] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
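
The trace above captures the standard autotest launch pattern: bdevperf is started with -z so it initializes and then idles waiting for RPC-driven jobs, and waitforlisten polls the RPC socket before any further RPCs are issued. A minimal sketch of that sequence, assuming the paths from this run and the usual shell backgrounding idiom (the & and $! plumbing is implied by the trace rather than visible in it; killprocess and waitforlisten are the autotest_common.sh helpers seen above):

    # Start bdevperf in "wait for RPC" mode and hold it under a cleanup trap.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
    bdevperf_pid=$!
    trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$bdevperf_pid"   # blocks until /var/tmp/spdk.sock answers RPCs
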
00:16:49.618 [2024-10-01 03:43:41.786442] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73758 ] 00:16:49.618 [2024-10-01 03:43:41.933775] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.618 [2024-10-01 03:43:42.110032] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:16:50.186 03:43:42 ftl.ftl_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:50.186 03:43:42 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:16:50.186 03:43:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:16:50.186 03:43:42 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:16:50.186 03:43:42 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:16:50.186 03:43:42 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:16:50.186 03:43:42 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:16:50.186 03:43:42 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:16:50.444 03:43:42 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:16:50.445 03:43:42 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:16:50.445 03:43:42 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:16:50.445 03:43:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:16:50.445 03:43:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:50.445 03:43:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:50.445 03:43:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:50.445 03:43:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:16:50.702 03:43:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:50.702 { 00:16:50.702 "name": "nvme0n1", 00:16:50.702 "aliases": [ 00:16:50.702 "dd1b2bc1-50a7-4a92-9986-8d66d5133e72" 00:16:50.702 ], 00:16:50.702 "product_name": "NVMe disk", 00:16:50.702 "block_size": 4096, 00:16:50.702 "num_blocks": 1310720, 00:16:50.702 "uuid": "dd1b2bc1-50a7-4a92-9986-8d66d5133e72", 00:16:50.702 "numa_id": -1, 00:16:50.702 "assigned_rate_limits": { 00:16:50.702 "rw_ios_per_sec": 0, 00:16:50.702 "rw_mbytes_per_sec": 0, 00:16:50.702 "r_mbytes_per_sec": 0, 00:16:50.702 "w_mbytes_per_sec": 0 00:16:50.702 }, 00:16:50.702 "claimed": true, 00:16:50.702 "claim_type": "read_many_write_one", 00:16:50.702 "zoned": false, 00:16:50.702 "supported_io_types": { 00:16:50.702 "read": true, 00:16:50.702 "write": true, 00:16:50.702 "unmap": true, 00:16:50.702 "flush": true, 00:16:50.702 "reset": true, 00:16:50.702 "nvme_admin": true, 00:16:50.702 "nvme_io": true, 00:16:50.702 "nvme_io_md": false, 00:16:50.702 "write_zeroes": true, 00:16:50.703 "zcopy": false, 00:16:50.703 "get_zone_info": false, 00:16:50.703 "zone_management": false, 00:16:50.703 "zone_append": false, 00:16:50.703 "compare": true, 00:16:50.703 "compare_and_write": false, 00:16:50.703 "abort": true, 00:16:50.703 "seek_hole": false, 00:16:50.703 "seek_data": false, 00:16:50.703 "copy": true, 00:16:50.703 "nvme_iov_md": false 00:16:50.703 }, 00:16:50.703 "driver_specific": { 00:16:50.703 
"nvme": [ 00:16:50.703 { 00:16:50.703 "pci_address": "0000:00:11.0", 00:16:50.703 "trid": { 00:16:50.703 "trtype": "PCIe", 00:16:50.703 "traddr": "0000:00:11.0" 00:16:50.703 }, 00:16:50.703 "ctrlr_data": { 00:16:50.703 "cntlid": 0, 00:16:50.703 "vendor_id": "0x1b36", 00:16:50.703 "model_number": "QEMU NVMe Ctrl", 00:16:50.703 "serial_number": "12341", 00:16:50.703 "firmware_revision": "8.0.0", 00:16:50.703 "subnqn": "nqn.2019-08.org.qemu:12341", 00:16:50.703 "oacs": { 00:16:50.703 "security": 0, 00:16:50.703 "format": 1, 00:16:50.703 "firmware": 0, 00:16:50.703 "ns_manage": 1 00:16:50.703 }, 00:16:50.703 "multi_ctrlr": false, 00:16:50.703 "ana_reporting": false 00:16:50.703 }, 00:16:50.703 "vs": { 00:16:50.703 "nvme_version": "1.4" 00:16:50.703 }, 00:16:50.703 "ns_data": { 00:16:50.703 "id": 1, 00:16:50.703 "can_share": false 00:16:50.703 } 00:16:50.703 } 00:16:50.703 ], 00:16:50.703 "mp_policy": "active_passive" 00:16:50.703 } 00:16:50.703 } 00:16:50.703 ]' 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=1310720 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 5120 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:16:50.703 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:16:50.961 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=a32c9bb9-d754-4627-abf1-a5295b200573 00:16:50.961 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:16:50.961 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a32c9bb9-d754-4627-abf1-a5295b200573 00:16:51.220 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:16:51.479 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=572535e1-2114-4d96-b700-2c17be5fcb48 00:16:51.479 03:43:43 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 572535e1-2114-4d96-b700-2c17be5fcb48 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size 55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:51.479 03:43:44 
ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:51.479 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:51.738 { 00:16:51.738 "name": "55e79403-f6fe-4ddf-825c-629d0a79685d", 00:16:51.738 "aliases": [ 00:16:51.738 "lvs/nvme0n1p0" 00:16:51.738 ], 00:16:51.738 "product_name": "Logical Volume", 00:16:51.738 "block_size": 4096, 00:16:51.738 "num_blocks": 26476544, 00:16:51.738 "uuid": "55e79403-f6fe-4ddf-825c-629d0a79685d", 00:16:51.738 "assigned_rate_limits": { 00:16:51.738 "rw_ios_per_sec": 0, 00:16:51.738 "rw_mbytes_per_sec": 0, 00:16:51.738 "r_mbytes_per_sec": 0, 00:16:51.738 "w_mbytes_per_sec": 0 00:16:51.738 }, 00:16:51.738 "claimed": false, 00:16:51.738 "zoned": false, 00:16:51.738 "supported_io_types": { 00:16:51.738 "read": true, 00:16:51.738 "write": true, 00:16:51.738 "unmap": true, 00:16:51.738 "flush": false, 00:16:51.738 "reset": true, 00:16:51.738 "nvme_admin": false, 00:16:51.738 "nvme_io": false, 00:16:51.738 "nvme_io_md": false, 00:16:51.738 "write_zeroes": true, 00:16:51.738 "zcopy": false, 00:16:51.738 "get_zone_info": false, 00:16:51.738 "zone_management": false, 00:16:51.738 "zone_append": false, 00:16:51.738 "compare": false, 00:16:51.738 "compare_and_write": false, 00:16:51.738 "abort": false, 00:16:51.738 "seek_hole": true, 00:16:51.738 "seek_data": true, 00:16:51.738 "copy": false, 00:16:51.738 "nvme_iov_md": false 00:16:51.738 }, 00:16:51.738 "driver_specific": { 00:16:51.738 "lvol": { 00:16:51.738 "lvol_store_uuid": "572535e1-2114-4d96-b700-2c17be5fcb48", 00:16:51.738 "base_bdev": "nvme0n1", 00:16:51.738 "thin_provision": true, 00:16:51.738 "num_allocated_clusters": 0, 00:16:51.738 "snapshot": false, 00:16:51.738 "clone": false, 00:16:51.738 "esnap_clone": false 00:16:51.738 } 00:16:51.738 } 00:16:51.738 } 00:16:51.738 ]' 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:16:51.738 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:16:51.997 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:16:51.997 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:16:51.997 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size 55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:51.997 03:43:44 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1378 -- # local bdev_name=55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:51.997 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:51.997 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bs 00:16:51.997 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:51.997 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:52.256 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:52.256 { 00:16:52.256 "name": "55e79403-f6fe-4ddf-825c-629d0a79685d", 00:16:52.256 "aliases": [ 00:16:52.256 "lvs/nvme0n1p0" 00:16:52.256 ], 00:16:52.256 "product_name": "Logical Volume", 00:16:52.256 "block_size": 4096, 00:16:52.256 "num_blocks": 26476544, 00:16:52.256 "uuid": "55e79403-f6fe-4ddf-825c-629d0a79685d", 00:16:52.256 "assigned_rate_limits": { 00:16:52.256 "rw_ios_per_sec": 0, 00:16:52.257 "rw_mbytes_per_sec": 0, 00:16:52.257 "r_mbytes_per_sec": 0, 00:16:52.257 "w_mbytes_per_sec": 0 00:16:52.257 }, 00:16:52.257 "claimed": false, 00:16:52.257 "zoned": false, 00:16:52.257 "supported_io_types": { 00:16:52.257 "read": true, 00:16:52.257 "write": true, 00:16:52.257 "unmap": true, 00:16:52.257 "flush": false, 00:16:52.257 "reset": true, 00:16:52.257 "nvme_admin": false, 00:16:52.257 "nvme_io": false, 00:16:52.257 "nvme_io_md": false, 00:16:52.257 "write_zeroes": true, 00:16:52.257 "zcopy": false, 00:16:52.257 "get_zone_info": false, 00:16:52.257 "zone_management": false, 00:16:52.257 "zone_append": false, 00:16:52.257 "compare": false, 00:16:52.257 "compare_and_write": false, 00:16:52.257 "abort": false, 00:16:52.257 "seek_hole": true, 00:16:52.257 "seek_data": true, 00:16:52.257 "copy": false, 00:16:52.257 "nvme_iov_md": false 00:16:52.257 }, 00:16:52.257 "driver_specific": { 00:16:52.257 "lvol": { 00:16:52.257 "lvol_store_uuid": "572535e1-2114-4d96-b700-2c17be5fcb48", 00:16:52.257 "base_bdev": "nvme0n1", 00:16:52.257 "thin_provision": true, 00:16:52.257 "num_allocated_clusters": 0, 00:16:52.257 "snapshot": false, 00:16:52.257 "clone": false, 00:16:52.257 "esnap_clone": false 00:16:52.257 } 00:16:52.257 } 00:16:52.257 } 00:16:52.257 ]' 00:16:52.257 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:52.257 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:52.257 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:52.257 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:52.257 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:52.257 03:43:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:52.257 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:16:52.257 03:43:44 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:16:52.516 03:43:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:16:52.516 03:43:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size 55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:52.516 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1378 -- # local bdev_name=55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:52.516 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1379 -- # local bdev_info 00:16:52.516 03:43:45 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bs 00:16:52.516 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local nb 00:16:52.516 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 55e79403-f6fe-4ddf-825c-629d0a79685d 00:16:52.775 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:16:52.775 { 00:16:52.775 "name": "55e79403-f6fe-4ddf-825c-629d0a79685d", 00:16:52.775 "aliases": [ 00:16:52.775 "lvs/nvme0n1p0" 00:16:52.775 ], 00:16:52.775 "product_name": "Logical Volume", 00:16:52.775 "block_size": 4096, 00:16:52.775 "num_blocks": 26476544, 00:16:52.775 "uuid": "55e79403-f6fe-4ddf-825c-629d0a79685d", 00:16:52.775 "assigned_rate_limits": { 00:16:52.775 "rw_ios_per_sec": 0, 00:16:52.775 "rw_mbytes_per_sec": 0, 00:16:52.775 "r_mbytes_per_sec": 0, 00:16:52.775 "w_mbytes_per_sec": 0 00:16:52.775 }, 00:16:52.775 "claimed": false, 00:16:52.775 "zoned": false, 00:16:52.775 "supported_io_types": { 00:16:52.775 "read": true, 00:16:52.775 "write": true, 00:16:52.775 "unmap": true, 00:16:52.775 "flush": false, 00:16:52.775 "reset": true, 00:16:52.775 "nvme_admin": false, 00:16:52.775 "nvme_io": false, 00:16:52.775 "nvme_io_md": false, 00:16:52.775 "write_zeroes": true, 00:16:52.775 "zcopy": false, 00:16:52.775 "get_zone_info": false, 00:16:52.775 "zone_management": false, 00:16:52.775 "zone_append": false, 00:16:52.775 "compare": false, 00:16:52.775 "compare_and_write": false, 00:16:52.775 "abort": false, 00:16:52.775 "seek_hole": true, 00:16:52.775 "seek_data": true, 00:16:52.775 "copy": false, 00:16:52.775 "nvme_iov_md": false 00:16:52.775 }, 00:16:52.775 "driver_specific": { 00:16:52.775 "lvol": { 00:16:52.775 "lvol_store_uuid": "572535e1-2114-4d96-b700-2c17be5fcb48", 00:16:52.775 "base_bdev": "nvme0n1", 00:16:52.775 "thin_provision": true, 00:16:52.775 "num_allocated_clusters": 0, 00:16:52.775 "snapshot": false, 00:16:52.775 "clone": false, 00:16:52.775 "esnap_clone": false 00:16:52.775 } 00:16:52.775 } 00:16:52.775 } 00:16:52.775 ]' 00:16:52.775 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:16:52.775 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # bs=4096 00:16:52.775 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:16:52.775 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # nb=26476544 00:16:52.775 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:16:52.775 03:43:45 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # echo 103424 00:16:52.775 03:43:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:16:52.775 03:43:45 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 55e79403-f6fe-4ddf-825c-629d0a79685d -c nvc0n1p0 --l2p_dram_limit 20 00:16:53.056 [2024-10-01 03:43:45.469221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.056 [2024-10-01 03:43:45.469280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:16:53.056 [2024-10-01 03:43:45.469293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:16:53.056 [2024-10-01 03:43:45.469301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.056 [2024-10-01 03:43:45.469352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.056 [2024-10-01 03:43:45.469361] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:16:53.056 [2024-10-01 03:43:45.469368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:16:53.056 [2024-10-01 03:43:45.469390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.056 [2024-10-01 03:43:45.469405] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:16:53.056 [2024-10-01 03:43:45.470042] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:16:53.056 [2024-10-01 03:43:45.470056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.056 [2024-10-01 03:43:45.470065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:16:53.056 [2024-10-01 03:43:45.470072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.656 ms 00:16:53.056 [2024-10-01 03:43:45.470081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.056 [2024-10-01 03:43:45.470136] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 8cdaa854-e99a-42ea-93f2-dd67f4cbacbd 00:16:53.056 [2024-10-01 03:43:45.471460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.056 [2024-10-01 03:43:45.471491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:16:53.056 [2024-10-01 03:43:45.471504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:16:53.056 [2024-10-01 03:43:45.471511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.056 [2024-10-01 03:43:45.478340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.056 [2024-10-01 03:43:45.478371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:16:53.056 [2024-10-01 03:43:45.478381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.795 ms 00:16:53.056 [2024-10-01 03:43:45.478387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.057 [2024-10-01 03:43:45.478478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.057 [2024-10-01 03:43:45.478486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:16:53.057 [2024-10-01 03:43:45.478498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:16:53.057 [2024-10-01 03:43:45.478504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.057 [2024-10-01 03:43:45.478556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.057 [2024-10-01 03:43:45.478563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:16:53.057 [2024-10-01 03:43:45.478573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:16:53.057 [2024-10-01 03:43:45.478579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.057 [2024-10-01 03:43:45.478599] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:16:53.057 [2024-10-01 03:43:45.481962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.057 [2024-10-01 03:43:45.481993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:16:53.057 [2024-10-01 03:43:45.482009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.371 ms 00:16:53.057 [2024-10-01 03:43:45.482017] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.057 [2024-10-01 03:43:45.482044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.057 [2024-10-01 03:43:45.482052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:16:53.057 [2024-10-01 03:43:45.482060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:16:53.057 [2024-10-01 03:43:45.482067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.057 [2024-10-01 03:43:45.482095] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:16:53.057 [2024-10-01 03:43:45.482211] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:16:53.057 [2024-10-01 03:43:45.482220] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:16:53.057 [2024-10-01 03:43:45.482230] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:16:53.057 [2024-10-01 03:43:45.482239] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482248] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482254] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:16:53.057 [2024-10-01 03:43:45.482263] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:16:53.057 [2024-10-01 03:43:45.482268] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:16:53.057 [2024-10-01 03:43:45.482276] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:16:53.057 [2024-10-01 03:43:45.482282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.057 [2024-10-01 03:43:45.482290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:16:53.057 [2024-10-01 03:43:45.482297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.188 ms 00:16:53.057 [2024-10-01 03:43:45.482304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.057 [2024-10-01 03:43:45.482369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.057 [2024-10-01 03:43:45.482377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:16:53.057 [2024-10-01 03:43:45.482383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:16:53.057 [2024-10-01 03:43:45.482399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.057 [2024-10-01 03:43:45.482472] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:16:53.057 [2024-10-01 03:43:45.482481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:16:53.057 [2024-10-01 03:43:45.482487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:16:53.057 [2024-10-01 03:43:45.482507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:16:53.057 
[2024-10-01 03:43:45.482520] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:16:53.057 [2024-10-01 03:43:45.482525] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:53.057 [2024-10-01 03:43:45.482536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:16:53.057 [2024-10-01 03:43:45.482549] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:16:53.057 [2024-10-01 03:43:45.482554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:16:53.057 [2024-10-01 03:43:45.482560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:16:53.057 [2024-10-01 03:43:45.482566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:16:53.057 [2024-10-01 03:43:45.482575] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:16:53.057 [2024-10-01 03:43:45.482590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:16:53.057 [2024-10-01 03:43:45.482606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482612] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482618] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:16:53.057 [2024-10-01 03:43:45.482624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:16:53.057 [2024-10-01 03:43:45.482640] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482646] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:16:53.057 [2024-10-01 03:43:45.482657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482662] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:16:53.057 [2024-10-01 03:43:45.482675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:53.057 [2024-10-01 03:43:45.482687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:16:53.057 [2024-10-01 03:43:45.482693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:16:53.057 [2024-10-01 03:43:45.482698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:16:53.057 [2024-10-01 03:43:45.482704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:16:53.057 [2024-10-01 03:43:45.482709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:16:53.057 [2024-10-01 03:43:45.482715] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:16:53.057 [2024-10-01 03:43:45.482727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:16:53.057 [2024-10-01 03:43:45.482732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482740] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:16:53.057 [2024-10-01 03:43:45.482746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:16:53.057 [2024-10-01 03:43:45.482754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:16:53.057 [2024-10-01 03:43:45.482767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:16:53.057 [2024-10-01 03:43:45.482773] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:16:53.057 [2024-10-01 03:43:45.482780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:16:53.057 [2024-10-01 03:43:45.482786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:16:53.057 [2024-10-01 03:43:45.482793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:16:53.057 [2024-10-01 03:43:45.482798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:16:53.057 [2024-10-01 03:43:45.482808] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:16:53.057 [2024-10-01 03:43:45.482817] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:53.057 [2024-10-01 03:43:45.482825] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:16:53.057 [2024-10-01 03:43:45.482831] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:16:53.057 [2024-10-01 03:43:45.482838] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:16:53.057 [2024-10-01 03:43:45.482843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:16:53.057 [2024-10-01 03:43:45.482850] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:16:53.057 [2024-10-01 03:43:45.482855] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:16:53.057 [2024-10-01 03:43:45.482862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:16:53.057 [2024-10-01 03:43:45.482867] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:16:53.057 [2024-10-01 03:43:45.482876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:16:53.057 [2024-10-01 03:43:45.482882] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:16:53.057 [2024-10-01 03:43:45.482890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:16:53.057 [2024-10-01 03:43:45.482895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:16:53.058 [2024-10-01 03:43:45.482902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:16:53.058 [2024-10-01 03:43:45.482907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:16:53.058 [2024-10-01 03:43:45.482914] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:16:53.058 [2024-10-01 03:43:45.482920] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:16:53.058 [2024-10-01 03:43:45.482928] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:16:53.058 [2024-10-01 03:43:45.482933] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:16:53.058 [2024-10-01 03:43:45.482940] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:16:53.058 [2024-10-01 03:43:45.482945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:16:53.058 [2024-10-01 03:43:45.482953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:53.058 [2024-10-01 03:43:45.482959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:16:53.058 [2024-10-01 03:43:45.482967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.534 ms 00:16:53.058 [2024-10-01 03:43:45.482972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:53.058 [2024-10-01 03:43:45.483022] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
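
Everything traced from create_base_bdev through the layout dump above is the FTL bring-up performed by ftl/common.sh. Condensed into the underlying RPC calls, with UUID placeholders standing in for values returned at runtime (sizes and PCI addresses are the ones from this run, and clear_lvols has already removed any stale lvstore), the sequence is roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base device
    $rpc bdev_lvol_create_lvstore nvme0n1 lvs                           # prints the lvstore UUID
    $rpc bdev_lvol_create nvme0n1p0 103424 -t -u <lvstore-uuid>         # thin lvol, 103424 MiB
    $rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # NV cache device
    $rpc bdev_split_create nvc0n1 -s 5171 1                             # one 5171 MiB cache slice
    $rpc -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> -c nvc0n1p0 --l2p_dram_limit 20

The --l2p_dram_limit 20 argument caps the resident L2P at 20 MiB, which the startup trace confirms below with "l2p maximum resident size is: 19 (of 20) MiB".
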
00:16:53.058 [2024-10-01 03:43:45.483033] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:16:55.643 [2024-10-01 03:43:47.744016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.744091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:16:55.643 [2024-10-01 03:43:47.744110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2260.964 ms 00:16:55.643 [2024-10-01 03:43:47.744120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.783344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.783401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:16:55.643 [2024-10-01 03:43:47.783419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.960 ms 00:16:55.643 [2024-10-01 03:43:47.783427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.783586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.783598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:16:55.643 [2024-10-01 03:43:47.783614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:16:55.643 [2024-10-01 03:43:47.783622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.816469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.816518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:16:55.643 [2024-10-01 03:43:47.816537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.811 ms 00:16:55.643 [2024-10-01 03:43:47.816545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.816585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.816593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:16:55.643 [2024-10-01 03:43:47.816604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:16:55.643 [2024-10-01 03:43:47.816612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.817088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.817105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:16:55.643 [2024-10-01 03:43:47.817116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:16:55.643 [2024-10-01 03:43:47.817124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.817248] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.817259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:16:55.643 [2024-10-01 03:43:47.817271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:16:55.643 [2024-10-01 03:43:47.817279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.830651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.830684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:16:55.643 [2024-10-01 
03:43:47.830696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.353 ms 00:16:55.643 [2024-10-01 03:43:47.830703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.842953] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:16:55.643 [2024-10-01 03:43:47.848991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.849046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:16:55.643 [2024-10-01 03:43:47.849056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.212 ms 00:16:55.643 [2024-10-01 03:43:47.849066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.907450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.907508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:16:55.643 [2024-10-01 03:43:47.907521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.355 ms 00:16:55.643 [2024-10-01 03:43:47.907531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.907719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.907735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:16:55.643 [2024-10-01 03:43:47.907744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:16:55.643 [2024-10-01 03:43:47.907753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.931090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.931138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:16:55.643 [2024-10-01 03:43:47.931149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.296 ms 00:16:55.643 [2024-10-01 03:43:47.931162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.954175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.954213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:16:55.643 [2024-10-01 03:43:47.954225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.977 ms 00:16:55.643 [2024-10-01 03:43:47.954234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:47.954820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:47.954840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:16:55.643 [2024-10-01 03:43:47.954852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:16:55.643 [2024-10-01 03:43:47.954861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:48.023666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:48.023712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:16:55.643 [2024-10-01 03:43:48.023724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.776 ms 00:16:55.643 [2024-10-01 03:43:48.023733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 
03:43:48.048449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:48.048506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:16:55.643 [2024-10-01 03:43:48.048518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.637 ms 00:16:55.643 [2024-10-01 03:43:48.048529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:48.071560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:48.071600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:16:55.643 [2024-10-01 03:43:48.071611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.997 ms 00:16:55.643 [2024-10-01 03:43:48.071620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:48.095184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:48.095224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:16:55.643 [2024-10-01 03:43:48.095235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.530 ms 00:16:55.643 [2024-10-01 03:43:48.095245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:48.095281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:48.095294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:16:55.643 [2024-10-01 03:43:48.095303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:16:55.643 [2024-10-01 03:43:48.095313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:48.095394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:16:55.643 [2024-10-01 03:43:48.095409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:16:55.643 [2024-10-01 03:43:48.095417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:16:55.643 [2024-10-01 03:43:48.095427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:16:55.643 [2024-10-01 03:43:48.096379] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2626.688 ms, result 0 00:16:55.643 { 00:16:55.643 "name": "ftl0", 00:16:55.643 "uuid": "8cdaa854-e99a-42ea-93f2-dd67f4cbacbd" 00:16:55.643 } 00:16:55.643 03:43:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:16:55.643 03:43:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:16:55.643 03:43:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:16:55.901 03:43:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:16:55.901 [2024-10-01 03:43:48.400605] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:16:55.901 I/O size of 69632 is greater than zero copy threshold (65536). 00:16:55.901 Zero copy mechanism will not be used. 00:16:55.901 Running I/O for 4 seconds... 
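
The run launched above is driven entirely over RPC: bdevperf.py perform_tests submits the job to the bdevperf process that has been idling since its -z startup. Note that the 69632-byte I/O size is 17 * 4 KiB and sits just above the 65536-byte zero-copy threshold, hence the notice that zero copy is bypassed. The MiB/s column in the results is simply IOPS * I/O size; a quick sanity check against the first table below (a sketch, assuming bc is available):

    # Reported: 3415.32 IOPS at 69632 B per I/O; expect 3415.32 * 69632 / 2^20 MiB/s.
    echo 'scale=2; 3415.32 * 69632 / 1048576' | bc   # => ~226.79, i.e. the 226.80 MiB/s below
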
00:17:00.073 3418.00 IOPS, 226.98 MiB/s 3373.00 IOPS, 223.99 MiB/s 3391.00 IOPS, 225.18 MiB/s 3417.00 IOPS, 226.91 MiB/s
00:17:00.073 Latency(us)
00:17:00.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:00.073 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632)
00:17:00.073 ftl0 : 4.00 3415.32 226.80 0.00 0.00 307.22 151.24 2230.74
00:17:00.073 ===================================================================================================================
00:17:00.073 Total : 3415.32 226.80 0.00 0.00 307.22 151.24 2230.74
00:17:00.073 [2024-10-01 03:43:52.411406] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 {
00:17:00.073 "results": [
00:17:00.073 {
00:17:00.073 "job": "ftl0",
00:17:00.073 "core_mask": "0x1",
00:17:00.073 "workload": "randwrite",
00:17:00.073 "status": "finished",
00:17:00.073 "queue_depth": 1,
00:17:00.073 "io_size": 69632,
00:17:00.073 "runtime": 4.002255,
00:17:00.073 "iops": 3415.3246107506893,
00:17:00.073 "mibps": 226.79889993266295,
00:17:00.073 "io_failed": 0,
00:17:00.073 "io_timeout": 0,
00:17:00.073 "avg_latency_us": 307.22328097829455,
00:17:00.073 "min_latency_us": 151.2369230769231,
00:17:00.073 "max_latency_us": 2230.7446153846154
00:17:00.073 }
00:17:00.073 ],
00:17:00.073 "core_count": 1
00:17:00.073 }
00:17:00.073 03:43:52 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096
[2024-10-01 03:43:52.511141] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
00:17:04.248 11337.00 IOPS, 44.29 MiB/s 10943.50 IOPS, 42.75 MiB/s 10785.67 IOPS, 42.13 MiB/s 10724.25 IOPS, 41.89 MiB/s
00:17:04.248 Latency(us)
00:17:04.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:04.248 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096)
00:17:04.248 ftl0 : 4.01 10723.62 41.89 0.00 0.00 11916.49 233.16 31053.98
00:17:04.248 ===================================================================================================================
00:17:04.248 Total : 10723.62 41.89 0.00 0.00 11916.49 0.00 31053.98
00:17:04.248 [2024-10-01 03:43:56.531073] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 {
00:17:04.248 "results": [
00:17:04.248 {
00:17:04.248 "job": "ftl0",
00:17:04.248 "core_mask": "0x1",
00:17:04.248 "workload": "randwrite",
00:17:04.248 "status": "finished",
00:17:04.248 "queue_depth": 128,
00:17:04.248 "io_size": 4096,
00:17:04.248 "runtime": 4.011706,
00:17:04.248 "iops": 10723.617333872422,
00:17:04.248 "mibps": 41.88913021043915,
00:17:04.248 "io_failed": 0,
00:17:04.248 "io_timeout": 0,
00:17:04.248 "avg_latency_us": 11916.490232950686,
00:17:04.248 "min_latency_us": 233.15692307692308,
00:17:04.248 "max_latency_us": 31053.98153846154
00:17:04.248 }
00:17:04.248 ],
00:17:04.248 "core_count": 1
00:17:04.248 }
00:17:04.248 03:43:56 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096
[2024-10-01 03:43:56.655370] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0
Running I/O for 4 seconds...
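
The queue-depth-128 numbers above behave as Little's law predicts: sustained IOPS is roughly in-flight I/Os divided by mean latency, and 128 / 11916.49 us lands within about 0.2% of the measured 10723.62 IOPS (the residual is ramp-up inside the 4 s window). A one-line check, again assuming bc:

    # Little's law: IOPS ~= queue_depth / avg_latency_in_seconds.
    echo 'scale=1; 128 / (11916.49 / 1000000)' | bc -l   # => ~10741.4 IOPS

The verify pass that starts next reads back a range of length 0x1400000 blocks starting at LBA 0x0, i.e. 20971520 blocks of 4 KiB, matching the 20971520 L2P entries reported in the layout dump earlier, so the whole addressable range is checked.
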
00:17:08.427 9561.00 IOPS, 37.35 MiB/s 9335.50 IOPS, 36.47 MiB/s 9332.00 IOPS, 36.45 MiB/s 9171.00 IOPS, 35.82 MiB/s
00:17:08.427 Latency(us)
00:17:08.427 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:08.427 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:17:08.427 Verification LBA range: start 0x0 length 0x1400000
00:17:08.427 ftl0 : 4.01 9182.70 35.87 0.00 0.00 13895.61 218.98 23794.61
00:17:08.427 ===================================================================================================================
00:17:08.427 Total : 9182.70 35.87 0.00 0.00 13895.61 0.00 23794.61
00:17:08.427 [2024-10-01 03:44:00.679493] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 {
00:17:08.427 "results": [
00:17:08.427 {
00:17:08.427 "job": "ftl0",
00:17:08.427 "core_mask": "0x1",
00:17:08.427 "workload": "verify",
00:17:08.427 "status": "finished",
00:17:08.427 "verify_range": {
00:17:08.427 "start": 0,
00:17:08.427 "length": 20971520
00:17:08.427 },
00:17:08.427 "queue_depth": 128,
00:17:08.427 "io_size": 4096,
00:17:08.427 "runtime": 4.008842,
00:17:08.427 "iops": 9182.701638029137,
00:17:08.427 "mibps": 35.86992827355132,
00:17:08.427 "io_failed": 0,
00:17:08.427 "io_timeout": 0,
00:17:08.427 "avg_latency_us": 13895.6099011192,
00:17:08.427 "min_latency_us": 218.97846153846154,
00:17:08.427 "max_latency_us": 23794.60923076923
00:17:08.427 }
00:17:08.427 ],
00:17:08.427 "core_count": 1
00:17:08.427 }
00:17:08.427 03:44:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0
[2024-10-01 03:44:00.877709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:00.877773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
[2024-10-01 03:44:00.877788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
[2024-10-01 03:44:00.877799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:00.877821] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
[2024-10-01 03:44:00.880625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:00.880657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
[2024-10-01 03:44:00.880670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.784 ms
[2024-10-01 03:44:00.880678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:00.882547] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:00.882582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
[2024-10-01 03:44:00.882597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.840 ms
[2024-10-01 03:44:00.882605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.028155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:01.028222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
[2024-10-01 03:44:01.028241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 145.518 ms
[2024-10-01 03:44:01.028250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.034403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:01.034441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
[2024-10-01 03:44:01.034453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.118 ms
[2024-10-01 03:44:01.034461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.058837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:01.058875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
[2024-10-01 03:44:01.058888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.296 ms
[2024-10-01 03:44:01.058896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.073989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:01.074033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
[2024-10-01 03:44:01.074047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.057 ms
[2024-10-01 03:44:01.074055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.074192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:01.074203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
[2024-10-01 03:44:01.074218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.100 ms
[2024-10-01 03:44:01.074228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.097519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:01.097550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
[2024-10-01 03:44:01.097562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.273 ms
[2024-10-01 03:44:01.097570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.119942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:01.119972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
[2024-10-01 03:44:01.119983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.339 ms
[2024-10-01 03:44:01.119991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.142380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:01.142417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
[2024-10-01 03:44:01.142429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.349 ms
[2024-10-01 03:44:01.142437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.164352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
[2024-10-01 03:44:01.164385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
[2024-10-01 03:44:01.164399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.826 ms
[2024-10-01 03:44:01.164407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[2024-10-01 03:44:01.164442] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
[2024-10-01 03:44:01.164460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
[2024-10-01 03:44:01.164665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:08.687 [2024-10-01 03:44:01.164882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:17:08.688 [2024-10-01 03:44:01.164889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164916] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.164992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:08.688 [2024-10-01 03:44:01.165372] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:08.688 [2024-10-01 03:44:01.165382] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 8cdaa854-e99a-42ea-93f2-dd67f4cbacbd 00:17:08.688 [2024-10-01 03:44:01.165393] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:08.688 [2024-10-01 03:44:01.165402] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:08.688 [2024-10-01 03:44:01.165409] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:08.688 [2024-10-01 03:44:01.165418] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:08.688 [2024-10-01 03:44:01.165426] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:08.688 [2024-10-01 03:44:01.165436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:08.688 [2024-10-01 03:44:01.165443] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:08.688 [2024-10-01 03:44:01.165454] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:08.688 [2024-10-01 03:44:01.165461] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:08.688 [2024-10-01 03:44:01.165470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.688 [2024-10-01 03:44:01.165477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:08.688 [2024-10-01 03:44:01.165489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.030 ms 00:17:08.688 [2024-10-01 03:44:01.165495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.688 [2024-10-01 03:44:01.178353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.688 [2024-10-01 03:44:01.178383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:08.688 [2024-10-01 03:44:01.178396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.826 ms 00:17:08.688 [2024-10-01 03:44:01.178404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.688 [2024-10-01 03:44:01.178769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:08.688 [2024-10-01 03:44:01.178788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:08.688 [2024-10-01 03:44:01.178801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.336 ms 00:17:08.688 [2024-10-01 03:44:01.178809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.688 [2024-10-01 03:44:01.210884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.688 [2024-10-01 03:44:01.210924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:08.688 [2024-10-01 03:44:01.210940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.688 [2024-10-01 03:44:01.210948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.688 [2024-10-01 03:44:01.211036] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.688 [2024-10-01 
03:44:01.211048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:08.688 [2024-10-01 03:44:01.211058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.688 [2024-10-01 03:44:01.211066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.688 [2024-10-01 03:44:01.211143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.688 [2024-10-01 03:44:01.211154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:08.688 [2024-10-01 03:44:01.211164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.688 [2024-10-01 03:44:01.211172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.688 [2024-10-01 03:44:01.211189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.688 [2024-10-01 03:44:01.211196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:08.688 [2024-10-01 03:44:01.211208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.688 [2024-10-01 03:44:01.211215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.947 [2024-10-01 03:44:01.292380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.947 [2024-10-01 03:44:01.292439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:08.947 [2024-10-01 03:44:01.292456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.947 [2024-10-01 03:44:01.292464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.947 [2024-10-01 03:44:01.358103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.947 [2024-10-01 03:44:01.358165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:08.947 [2024-10-01 03:44:01.358183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.947 [2024-10-01 03:44:01.358191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.947 [2024-10-01 03:44:01.358305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.947 [2024-10-01 03:44:01.358316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:08.947 [2024-10-01 03:44:01.358326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.947 [2024-10-01 03:44:01.358334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.947 [2024-10-01 03:44:01.358380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.947 [2024-10-01 03:44:01.358390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:08.947 [2024-10-01 03:44:01.358401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.947 [2024-10-01 03:44:01.358423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.947 [2024-10-01 03:44:01.358523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:08.947 [2024-10-01 03:44:01.358534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:08.947 [2024-10-01 03:44:01.358547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:08.947 [2024-10-01 03:44:01.358554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:08.947 [2024-10-01 03:44:01.358586] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:08.947 [2024-10-01 03:44:01.358595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:17:08.947 [2024-10-01 03:44:01.358605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:08.947 [2024-10-01 03:44:01.358613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:08.947 [2024-10-01 03:44:01.358654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:08.947 [2024-10-01 03:44:01.358670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:17:08.947 [2024-10-01 03:44:01.358680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:08.947 [2024-10-01 03:44:01.358688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:08.947 [2024-10-01 03:44:01.358735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:08.947 [2024-10-01 03:44:01.358745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:08.947 [2024-10-01 03:44:01.358756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:08.947 [2024-10-01 03:44:01.358765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:08.947 [2024-10-01 03:44:01.358902] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 481.145 ms, result 0
00:17:08.947 true
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 73758
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 73758 ']'
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # kill -0 73758
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # uname
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73758
00:17:08.947 killing process with pid 73758
Received shutdown signal, test time was about 4.000000 seconds
00:17:08.947
00:17:08.947 Latency(us)
00:17:08.947 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:08.947 ===================================================================================================================
00:17:08.947 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73758'
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@969 -- # kill 73758
00:17:08.947 03:44:01 ftl.ftl_bdevperf -- common/autotest_common.sh@974 -- # wait 73758
00:17:13.132 03:44:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:17:13.132 03:44:05 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:17:13.132 Remove shared memory files
00:17:13.132 03:44:05 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:17:13.132 03:44:05 ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f
00:17:13.132 03:44:05 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f
00:17:13.391 03:44:05 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f
00:17:13.391 03:44:05 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi
00:17:13.391 03:44:05 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f
00:17:13.391
00:17:13.391 real 0m24.118s
00:17:13.391 user 0m26.774s
00:17:13.391 sys 0m0.900s
00:17:13.391 03:44:05 ftl.ftl_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable
00:17:13.391 03:44:05 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:17:13.391 ************************************
00:17:13.391 END TEST ftl_bdevperf
00:17:13.391 ************************************
00:17:13.391 03:44:05 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:17:13.391 03:44:05 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']'
00:17:13.391 03:44:05 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable
00:17:13.391 03:44:05 ftl -- common/autotest_common.sh@10 -- # set +x
00:17:13.391 ************************************
00:17:13.391 START TEST ftl_trim
00:17:13.391 ************************************
00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0
00:17:13.391 * Looking for test storage...
00:17:13.391 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl
00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lcov --version
00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-:
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-:
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<'
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 ))
00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:13.391 03:44:05 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:17:13.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.391 --rc genhtml_branch_coverage=1 00:17:13.391 --rc genhtml_function_coverage=1 00:17:13.391 --rc genhtml_legend=1 00:17:13.391 --rc geninfo_all_blocks=1 00:17:13.391 --rc geninfo_unexecuted_blocks=1 00:17:13.391 00:17:13.391 ' 00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:17:13.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.391 --rc genhtml_branch_coverage=1 00:17:13.391 --rc genhtml_function_coverage=1 00:17:13.391 --rc genhtml_legend=1 00:17:13.391 --rc geninfo_all_blocks=1 00:17:13.391 --rc geninfo_unexecuted_blocks=1 00:17:13.391 00:17:13.391 ' 00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:17:13.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.391 --rc genhtml_branch_coverage=1 00:17:13.391 --rc genhtml_function_coverage=1 00:17:13.391 --rc genhtml_legend=1 00:17:13.391 --rc geninfo_all_blocks=1 00:17:13.391 --rc geninfo_unexecuted_blocks=1 00:17:13.391 00:17:13.391 ' 00:17:13.391 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:17:13.391 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:13.391 --rc genhtml_branch_coverage=1 00:17:13.391 --rc genhtml_function_coverage=1 00:17:13.391 --rc genhtml_legend=1 00:17:13.391 --rc geninfo_all_blocks=1 00:17:13.391 --rc geninfo_unexecuted_blocks=1 00:17:13.391 00:17:13.391 ' 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
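The xtrace run just traced is scripts/common.sh's cmp_versions helper: each version string is split into fields on '.', '-' and ':' (IFS=.-: read -ra), and the fields are compared numerically until one differs, so lt 1.15 2 decides at the first field, 1 < 2. Below is a minimal standalone sketch of the same field-wise comparison in bash, assuming plain dot-separated numeric versions; the ver_lt name and the dot-only splitting are illustrative, not the harness's exact helper.

ver_lt() {                        # returns 0 (true) when $1 sorts strictly before $2
    local -a a b
    IFS=. read -ra a <<< "$1"     # split "1.15" into (1 15)
    IFS=. read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # first differing field decides
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2.x"      # mirrors the traced lt 1.15 2

Missing fields default to 0, which matches the effect of the harness padding the shorter version (so 1.15 and 1.15.0 compare equal).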
00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:13.391 03:44:05 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:13.392 03:44:05 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=74095 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 74095 00:17:13.392 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74095 ']' 00:17:13.392 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.392 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.392 03:44:05 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:17:13.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.392 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.392 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.392 03:44:05 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:13.651 [2024-10-01 03:44:05.977411] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:13.651 [2024-10-01 03:44:05.977696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74095 ] 00:17:13.651 [2024-10-01 03:44:06.125800] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:13.909 [2024-10-01 03:44:06.308190] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.909 [2024-10-01 03:44:06.308590] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 2 00:17:13.909 [2024-10-01 03:44:06.308743] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:14.475 03:44:06 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:14.475 03:44:06 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:14.475 03:44:06 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:14.475 03:44:06 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:17:14.475 03:44:06 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:14.475 03:44:06 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:17:14.475 03:44:06 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:17:14.475 03:44:06 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:14.734 03:44:07 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:14.734 03:44:07 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:17:14.734 03:44:07 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:14.734 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:17:14.734 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:14.734 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:14.734 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:14.734 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:14.993 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:14.993 { 00:17:14.993 "name": "nvme0n1", 00:17:14.993 "aliases": [ 
00:17:14.993 "68e1fc6a-93c2-48b0-809e-5e258efabff7" 00:17:14.993 ], 00:17:14.993 "product_name": "NVMe disk", 00:17:14.993 "block_size": 4096, 00:17:14.993 "num_blocks": 1310720, 00:17:14.993 "uuid": "68e1fc6a-93c2-48b0-809e-5e258efabff7", 00:17:14.993 "numa_id": -1, 00:17:14.993 "assigned_rate_limits": { 00:17:14.993 "rw_ios_per_sec": 0, 00:17:14.993 "rw_mbytes_per_sec": 0, 00:17:14.993 "r_mbytes_per_sec": 0, 00:17:14.993 "w_mbytes_per_sec": 0 00:17:14.993 }, 00:17:14.993 "claimed": true, 00:17:14.993 "claim_type": "read_many_write_one", 00:17:14.993 "zoned": false, 00:17:14.993 "supported_io_types": { 00:17:14.993 "read": true, 00:17:14.993 "write": true, 00:17:14.993 "unmap": true, 00:17:14.993 "flush": true, 00:17:14.993 "reset": true, 00:17:14.993 "nvme_admin": true, 00:17:14.993 "nvme_io": true, 00:17:14.993 "nvme_io_md": false, 00:17:14.993 "write_zeroes": true, 00:17:14.993 "zcopy": false, 00:17:14.993 "get_zone_info": false, 00:17:14.993 "zone_management": false, 00:17:14.993 "zone_append": false, 00:17:14.993 "compare": true, 00:17:14.993 "compare_and_write": false, 00:17:14.993 "abort": true, 00:17:14.993 "seek_hole": false, 00:17:14.993 "seek_data": false, 00:17:14.993 "copy": true, 00:17:14.993 "nvme_iov_md": false 00:17:14.993 }, 00:17:14.993 "driver_specific": { 00:17:14.993 "nvme": [ 00:17:14.993 { 00:17:14.993 "pci_address": "0000:00:11.0", 00:17:14.993 "trid": { 00:17:14.993 "trtype": "PCIe", 00:17:14.993 "traddr": "0000:00:11.0" 00:17:14.993 }, 00:17:14.993 "ctrlr_data": { 00:17:14.993 "cntlid": 0, 00:17:14.993 "vendor_id": "0x1b36", 00:17:14.993 "model_number": "QEMU NVMe Ctrl", 00:17:14.993 "serial_number": "12341", 00:17:14.993 "firmware_revision": "8.0.0", 00:17:14.993 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:14.993 "oacs": { 00:17:14.993 "security": 0, 00:17:14.993 "format": 1, 00:17:14.993 "firmware": 0, 00:17:14.993 "ns_manage": 1 00:17:14.993 }, 00:17:14.993 "multi_ctrlr": false, 00:17:14.993 "ana_reporting": false 00:17:14.993 }, 00:17:14.993 "vs": { 00:17:14.993 "nvme_version": "1.4" 00:17:14.993 }, 00:17:14.993 "ns_data": { 00:17:14.993 "id": 1, 00:17:14.993 "can_share": false 00:17:14.993 } 00:17:14.993 } 00:17:14.993 ], 00:17:14.993 "mp_policy": "active_passive" 00:17:14.993 } 00:17:14.993 } 00:17:14.993 ]' 00:17:14.993 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:14.993 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:14.993 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:14.993 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=1310720 00:17:14.993 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:17:14.993 03:44:07 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 5120 00:17:14.993 03:44:07 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:17:14.993 03:44:07 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:14.993 03:44:07 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:17:14.993 03:44:07 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:14.993 03:44:07 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:15.252 03:44:07 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=572535e1-2114-4d96-b700-2c17be5fcb48 00:17:15.252 03:44:07 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:17:15.252 03:44:07 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 572535e1-2114-4d96-b700-2c17be5fcb48 00:17:15.510 03:44:07 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:15.769 03:44:08 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=5a6b6fcb-b1ea-4dd1-a9aa-323ef715e8aa 00:17:15.769 03:44:08 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5a6b6fcb-b1ea-4dd1-a9aa-323ef715e8aa 00:17:16.027 03:44:08 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.027 03:44:08 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.027 03:44:08 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:17:16.027 03:44:08 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:16.027 03:44:08 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.027 03:44:08 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:17:16.027 03:44:08 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.027 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.027 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:16.027 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:16.027 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:16.027 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.027 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:16.027 { 00:17:16.027 "name": "cd6fab18-707e-4cff-89e8-9f85b8f136a1", 00:17:16.027 "aliases": [ 00:17:16.027 "lvs/nvme0n1p0" 00:17:16.027 ], 00:17:16.027 "product_name": "Logical Volume", 00:17:16.027 "block_size": 4096, 00:17:16.027 "num_blocks": 26476544, 00:17:16.027 "uuid": "cd6fab18-707e-4cff-89e8-9f85b8f136a1", 00:17:16.027 "assigned_rate_limits": { 00:17:16.027 "rw_ios_per_sec": 0, 00:17:16.027 "rw_mbytes_per_sec": 0, 00:17:16.027 "r_mbytes_per_sec": 0, 00:17:16.027 "w_mbytes_per_sec": 0 00:17:16.027 }, 00:17:16.027 "claimed": false, 00:17:16.027 "zoned": false, 00:17:16.027 "supported_io_types": { 00:17:16.027 "read": true, 00:17:16.027 "write": true, 00:17:16.027 "unmap": true, 00:17:16.027 "flush": false, 00:17:16.027 "reset": true, 00:17:16.027 "nvme_admin": false, 00:17:16.027 "nvme_io": false, 00:17:16.027 "nvme_io_md": false, 00:17:16.027 "write_zeroes": true, 00:17:16.027 "zcopy": false, 00:17:16.027 "get_zone_info": false, 00:17:16.027 "zone_management": false, 00:17:16.027 "zone_append": false, 00:17:16.027 "compare": false, 00:17:16.027 "compare_and_write": false, 00:17:16.027 "abort": false, 00:17:16.027 "seek_hole": true, 00:17:16.027 "seek_data": true, 00:17:16.027 "copy": false, 00:17:16.027 "nvme_iov_md": false 00:17:16.027 }, 00:17:16.027 "driver_specific": { 00:17:16.027 "lvol": { 00:17:16.027 "lvol_store_uuid": "5a6b6fcb-b1ea-4dd1-a9aa-323ef715e8aa", 00:17:16.027 "base_bdev": "nvme0n1", 00:17:16.027 "thin_provision": true, 00:17:16.027 "num_allocated_clusters": 0, 00:17:16.027 "snapshot": false, 00:17:16.027 "clone": false, 00:17:16.027 "esnap_clone": false 00:17:16.027 } 00:17:16.027 } 00:17:16.027 } 00:17:16.027 ]' 00:17:16.027 03:44:08 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:16.286 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:16.286 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:16.286 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:16.286 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:16.286 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:16.286 03:44:08 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:17:16.286 03:44:08 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:17:16.286 03:44:08 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:16.545 03:44:08 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:16.545 03:44:08 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:16.545 03:44:08 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.545 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.545 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:16.545 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:16.545 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:16.545 03:44:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.545 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:16.545 { 00:17:16.545 "name": "cd6fab18-707e-4cff-89e8-9f85b8f136a1", 00:17:16.545 "aliases": [ 00:17:16.545 "lvs/nvme0n1p0" 00:17:16.545 ], 00:17:16.545 "product_name": "Logical Volume", 00:17:16.545 "block_size": 4096, 00:17:16.545 "num_blocks": 26476544, 00:17:16.545 "uuid": "cd6fab18-707e-4cff-89e8-9f85b8f136a1", 00:17:16.545 "assigned_rate_limits": { 00:17:16.545 "rw_ios_per_sec": 0, 00:17:16.545 "rw_mbytes_per_sec": 0, 00:17:16.545 "r_mbytes_per_sec": 0, 00:17:16.545 "w_mbytes_per_sec": 0 00:17:16.545 }, 00:17:16.545 "claimed": false, 00:17:16.545 "zoned": false, 00:17:16.545 "supported_io_types": { 00:17:16.545 "read": true, 00:17:16.545 "write": true, 00:17:16.545 "unmap": true, 00:17:16.545 "flush": false, 00:17:16.545 "reset": true, 00:17:16.545 "nvme_admin": false, 00:17:16.545 "nvme_io": false, 00:17:16.545 "nvme_io_md": false, 00:17:16.545 "write_zeroes": true, 00:17:16.545 "zcopy": false, 00:17:16.545 "get_zone_info": false, 00:17:16.545 "zone_management": false, 00:17:16.545 "zone_append": false, 00:17:16.545 "compare": false, 00:17:16.545 "compare_and_write": false, 00:17:16.545 "abort": false, 00:17:16.545 "seek_hole": true, 00:17:16.545 "seek_data": true, 00:17:16.545 "copy": false, 00:17:16.545 "nvme_iov_md": false 00:17:16.545 }, 00:17:16.545 "driver_specific": { 00:17:16.545 "lvol": { 00:17:16.545 "lvol_store_uuid": "5a6b6fcb-b1ea-4dd1-a9aa-323ef715e8aa", 00:17:16.545 "base_bdev": "nvme0n1", 00:17:16.545 "thin_provision": true, 00:17:16.545 "num_allocated_clusters": 0, 00:17:16.545 "snapshot": false, 00:17:16.545 "clone": false, 00:17:16.545 "esnap_clone": false 00:17:16.545 } 00:17:16.545 } 00:17:16.545 } 00:17:16.545 ]' 00:17:16.545 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:16.803 03:44:09 ftl.ftl_trim -- 
common/autotest_common.sh@1383 -- # bs=4096 00:17:16.803 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:16.803 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # nb=26476544 00:17:16.803 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:16.803 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:16.803 03:44:09 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:17:16.803 03:44:09 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:16.803 03:44:09 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:17:16.803 03:44:09 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:17:16.803 03:44:09 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.803 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1378 -- # local bdev_name=cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:16.803 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1379 -- # local bdev_info 00:17:16.803 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bs 00:17:16.803 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local nb 00:17:16.803 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cd6fab18-707e-4cff-89e8-9f85b8f136a1 00:17:17.061 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:17:17.061 { 00:17:17.061 "name": "cd6fab18-707e-4cff-89e8-9f85b8f136a1", 00:17:17.061 "aliases": [ 00:17:17.061 "lvs/nvme0n1p0" 00:17:17.061 ], 00:17:17.061 "product_name": "Logical Volume", 00:17:17.061 "block_size": 4096, 00:17:17.061 "num_blocks": 26476544, 00:17:17.061 "uuid": "cd6fab18-707e-4cff-89e8-9f85b8f136a1", 00:17:17.061 "assigned_rate_limits": { 00:17:17.061 "rw_ios_per_sec": 0, 00:17:17.061 "rw_mbytes_per_sec": 0, 00:17:17.061 "r_mbytes_per_sec": 0, 00:17:17.061 "w_mbytes_per_sec": 0 00:17:17.061 }, 00:17:17.061 "claimed": false, 00:17:17.061 "zoned": false, 00:17:17.061 "supported_io_types": { 00:17:17.061 "read": true, 00:17:17.061 "write": true, 00:17:17.061 "unmap": true, 00:17:17.061 "flush": false, 00:17:17.061 "reset": true, 00:17:17.061 "nvme_admin": false, 00:17:17.061 "nvme_io": false, 00:17:17.061 "nvme_io_md": false, 00:17:17.061 "write_zeroes": true, 00:17:17.061 "zcopy": false, 00:17:17.061 "get_zone_info": false, 00:17:17.061 "zone_management": false, 00:17:17.061 "zone_append": false, 00:17:17.061 "compare": false, 00:17:17.061 "compare_and_write": false, 00:17:17.061 "abort": false, 00:17:17.061 "seek_hole": true, 00:17:17.061 "seek_data": true, 00:17:17.061 "copy": false, 00:17:17.061 "nvme_iov_md": false 00:17:17.061 }, 00:17:17.061 "driver_specific": { 00:17:17.061 "lvol": { 00:17:17.061 "lvol_store_uuid": "5a6b6fcb-b1ea-4dd1-a9aa-323ef715e8aa", 00:17:17.061 "base_bdev": "nvme0n1", 00:17:17.061 "thin_provision": true, 00:17:17.061 "num_allocated_clusters": 0, 00:17:17.061 "snapshot": false, 00:17:17.061 "clone": false, 00:17:17.061 "esnap_clone": false 00:17:17.061 } 00:17:17.061 } 00:17:17.061 } 00:17:17.061 ]' 00:17:17.061 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:17:17.061 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # bs=4096 00:17:17.061 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:17:17.321 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # 
nb=26476544 00:17:17.321 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:17:17.321 03:44:09 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # echo 103424 00:17:17.321 03:44:09 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:17:17.321 03:44:09 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cd6fab18-707e-4cff-89e8-9f85b8f136a1 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:17:17.321 [2024-10-01 03:44:09.827849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.827909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:17.321 [2024-10-01 03:44:09.827924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:17.321 [2024-10-01 03:44:09.827933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.830409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.830451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:17.321 [2024-10-01 03:44:09.830461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.450 ms 00:17:17.321 [2024-10-01 03:44:09.830468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.830581] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:17.321 [2024-10-01 03:44:09.831159] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:17.321 [2024-10-01 03:44:09.831186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.831193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:17.321 [2024-10-01 03:44:09.831202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.614 ms 00:17:17.321 [2024-10-01 03:44:09.831211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.831294] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 53a1b279-b409-4899-8013-3cb089931240 00:17:17.321 [2024-10-01 03:44:09.832606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.832770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:17.321 [2024-10-01 03:44:09.832784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:17:17.321 [2024-10-01 03:44:09.832794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.839924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.840066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:17.321 [2024-10-01 03:44:09.840079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.046 ms 00:17:17.321 [2024-10-01 03:44:09.840087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.840202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.840213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:17.321 [2024-10-01 03:44:09.840220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.064 ms 00:17:17.321 [2024-10-01 03:44:09.840233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.840264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.840272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:17.321 [2024-10-01 03:44:09.840279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:17.321 [2024-10-01 03:44:09.840286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.840318] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:17.321 [2024-10-01 03:44:09.843625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.843728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:17.321 [2024-10-01 03:44:09.843744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.311 ms 00:17:17.321 [2024-10-01 03:44:09.843751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.843799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.843806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:17.321 [2024-10-01 03:44:09.843815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:17.321 [2024-10-01 03:44:09.843823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.843867] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:17.321 [2024-10-01 03:44:09.843980] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:17.321 [2024-10-01 03:44:09.843994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:17.321 [2024-10-01 03:44:09.844030] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:17.321 [2024-10-01 03:44:09.844043] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:17.321 [2024-10-01 03:44:09.844051] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:17.321 [2024-10-01 03:44:09.844059] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:17.321 [2024-10-01 03:44:09.844066] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:17.321 [2024-10-01 03:44:09.844073] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:17.321 [2024-10-01 03:44:09.844079] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:17.321 [2024-10-01 03:44:09.844087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 [2024-10-01 03:44:09.844093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:17.321 [2024-10-01 03:44:09.844101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.240 ms 00:17:17.321 [2024-10-01 03:44:09.844107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.844193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.321 
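A quick sanity check on the L2P numbers in the layout dump above: the table keeps one 4-byte entry per 4 KiB logical block, so the reported 23592960 L2P entries work out to exactly the 90.00 MiB l2p region. A minimal shell check (illustrative only, not part of trim.sh):

    # 23592960 entries x 4 bytes per entry, converted to MiB
    echo $((23592960 * 4 / 1024 / 1024))   # prints: 90

The --l2p_dram_limit 60 passed to bdev_ftl_create caps how much of that table stays resident in DRAM; the l2p cache further down reports a maximum resident size of 59 (of 60) MiB accordingly.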
[2024-10-01 03:44:09.844202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:17.321 [2024-10-01 03:44:09.844210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:17:17.321 [2024-10-01 03:44:09.844216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.321 [2024-10-01 03:44:09.844317] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:17.321 [2024-10-01 03:44:09.844325] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:17.321 [2024-10-01 03:44:09.844333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.321 [2024-10-01 03:44:09.844339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.321 [2024-10-01 03:44:09.844347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:17.321 [2024-10-01 03:44:09.844353] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:17.321 [2024-10-01 03:44:09.844360] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:17.321 [2024-10-01 03:44:09.844365] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:17.321 [2024-10-01 03:44:09.844372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:17.321 [2024-10-01 03:44:09.844377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.321 [2024-10-01 03:44:09.844384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:17.321 [2024-10-01 03:44:09.844389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:17.322 [2024-10-01 03:44:09.844397] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:17.322 [2024-10-01 03:44:09.844403] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:17.322 [2024-10-01 03:44:09.844410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:17.322 [2024-10-01 03:44:09.844415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844424] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:17.322 [2024-10-01 03:44:09.844430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:17.322 [2024-10-01 03:44:09.844437] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:17.322 [2024-10-01 03:44:09.844449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.322 [2024-10-01 03:44:09.844461] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:17.322 [2024-10-01 03:44:09.844467] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.322 [2024-10-01 03:44:09.844480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:17.322 [2024-10-01 03:44:09.844486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.322 [2024-10-01 03:44:09.844497] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:17:17.322 [2024-10-01 03:44:09.844502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:17.322 [2024-10-01 03:44:09.844515] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:17.322 [2024-10-01 03:44:09.844524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844529] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.322 [2024-10-01 03:44:09.844535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:17.322 [2024-10-01 03:44:09.844540] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:17.322 [2024-10-01 03:44:09.844546] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:17.322 [2024-10-01 03:44:09.844551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:17.322 [2024-10-01 03:44:09.844558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:17.322 [2024-10-01 03:44:09.844564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844570] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:17.322 [2024-10-01 03:44:09.844575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:17.322 [2024-10-01 03:44:09.844581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844586] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:17.322 [2024-10-01 03:44:09.844594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:17.322 [2024-10-01 03:44:09.844601] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:17.322 [2024-10-01 03:44:09.844608] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:17.322 [2024-10-01 03:44:09.844615] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:17.322 [2024-10-01 03:44:09.844624] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:17.322 [2024-10-01 03:44:09.844629] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:17.322 [2024-10-01 03:44:09.844637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:17.322 [2024-10-01 03:44:09.844642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:17.322 [2024-10-01 03:44:09.844648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:17.322 [2024-10-01 03:44:09.844657] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:17.322 [2024-10-01 03:44:09.844665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.322 [2024-10-01 03:44:09.844674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:17.322 [2024-10-01 03:44:09.844682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:17.322 [2024-10-01 03:44:09.844688] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:17:17.322 [2024-10-01 03:44:09.844695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:17.322 [2024-10-01 03:44:09.844701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:17.322 [2024-10-01 03:44:09.844708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:17.322 [2024-10-01 03:44:09.844714] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:17.322 [2024-10-01 03:44:09.844721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:17.322 [2024-10-01 03:44:09.844726] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:17.322 [2024-10-01 03:44:09.844734] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:17.322 [2024-10-01 03:44:09.844740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:17.322 [2024-10-01 03:44:09.844747] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:17.322 [2024-10-01 03:44:09.844752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:17.322 [2024-10-01 03:44:09.844759] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:17.322 [2024-10-01 03:44:09.844765] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:17.322 [2024-10-01 03:44:09.844774] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:17.322 [2024-10-01 03:44:09.844780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:17.322 [2024-10-01 03:44:09.844788] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:17.322 [2024-10-01 03:44:09.844794] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:17.322 [2024-10-01 03:44:09.844801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:17.322 [2024-10-01 03:44:09.844807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.322 [2024-10-01 03:44:09.844814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:17.322 [2024-10-01 03:44:09.844820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:17:17.322 [2024-10-01 03:44:09.844827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.322 [2024-10-01 03:44:09.844896] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:17:17.322 [2024-10-01 03:44:09.844910] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:19.890 [2024-10-01 03:44:12.188460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.890 [2024-10-01 03:44:12.188746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:19.890 [2024-10-01 03:44:12.188789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2343.551 ms 00:17:19.890 [2024-10-01 03:44:12.188815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.890 [2024-10-01 03:44:12.228482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.890 [2024-10-01 03:44:12.228774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:19.890 [2024-10-01 03:44:12.228918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.366 ms 00:17:19.890 [2024-10-01 03:44:12.228961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.890 [2024-10-01 03:44:12.229356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.890 [2024-10-01 03:44:12.229487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:19.890 [2024-10-01 03:44:12.229568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.096 ms 00:17:19.890 [2024-10-01 03:44:12.229610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.890 [2024-10-01 03:44:12.264502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.890 [2024-10-01 03:44:12.264713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:19.890 [2024-10-01 03:44:12.264771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.783 ms 00:17:19.890 [2024-10-01 03:44:12.264798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.890 [2024-10-01 03:44:12.264917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.264947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:19.891 [2024-10-01 03:44:12.264969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:19.891 [2024-10-01 03:44:12.264990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.891 [2024-10-01 03:44:12.265890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.266030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:19.891 [2024-10-01 03:44:12.266096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.406 ms 00:17:19.891 [2024-10-01 03:44:12.266123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.891 [2024-10-01 03:44:12.266285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.266311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:19.891 [2024-10-01 03:44:12.266366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:17:19.891 [2024-10-01 03:44:12.266394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.891 [2024-10-01 03:44:12.282388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.282529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:17:19.891 [2024-10-01 03:44:12.282585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.938 ms 00:17:19.891 [2024-10-01 03:44:12.282610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.891 [2024-10-01 03:44:12.294846] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:19.891 [2024-10-01 03:44:12.312441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.312633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:19.891 [2024-10-01 03:44:12.312688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.643 ms 00:17:19.891 [2024-10-01 03:44:12.312713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.891 [2024-10-01 03:44:12.381032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.381289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:19.891 [2024-10-01 03:44:12.381845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.189 ms 00:17:19.891 [2024-10-01 03:44:12.382163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.891 [2024-10-01 03:44:12.382883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.382939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:19.891 [2024-10-01 03:44:12.382987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:17:19.891 [2024-10-01 03:44:12.383047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.891 [2024-10-01 03:44:12.413771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.413822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:19.891 [2024-10-01 03:44:12.413837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.631 ms 00:17:19.891 [2024-10-01 03:44:12.413845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.891 [2024-10-01 03:44:12.436692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.436878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:19.891 [2024-10-01 03:44:12.436900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.781 ms 00:17:19.891 [2024-10-01 03:44:12.436908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:19.891 [2024-10-01 03:44:12.437546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:19.891 [2024-10-01 03:44:12.437565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:19.891 [2024-10-01 03:44:12.437577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:17:19.891 [2024-10-01 03:44:12.437584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.149 [2024-10-01 03:44:12.508968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.149 [2024-10-01 03:44:12.509039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:20.149 [2024-10-01 03:44:12.509059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 71.336 ms 00:17:20.149 [2024-10-01 03:44:12.509068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
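For anyone reproducing this bring-up by hand, the startup trace above is driven by the single bdev_ftl_create RPC issued at trim.sh@49. A sketch of the equivalent standalone invocation, assuming a running SPDK target and the same base/cache bdev names as this run:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d cd6fab18-707e-4cff-89e8-9f85b8f136a1 -c nvc0n1p0 \
        --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10

The generous -t 240 timeout covers the NV cache scrub, which alone accounts for roughly 2.3 s of the total FTL startup duration reported at the end of the trace below.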
00:17:20.149 [2024-10-01 03:44:12.533946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.149 [2024-10-01 03:44:12.533998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:20.149 [2024-10-01 03:44:12.534028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.770 ms 00:17:20.149 [2024-10-01 03:44:12.534036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.149 [2024-10-01 03:44:12.557594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.149 [2024-10-01 03:44:12.557640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:20.149 [2024-10-01 03:44:12.557654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.490 ms 00:17:20.149 [2024-10-01 03:44:12.557661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.150 [2024-10-01 03:44:12.581299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.150 [2024-10-01 03:44:12.581504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:20.150 [2024-10-01 03:44:12.581526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.552 ms 00:17:20.150 [2024-10-01 03:44:12.581534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.150 [2024-10-01 03:44:12.581604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.150 [2024-10-01 03:44:12.581614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:20.150 [2024-10-01 03:44:12.581628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:20.150 [2024-10-01 03:44:12.581652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.150 [2024-10-01 03:44:12.581741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:20.150 [2024-10-01 03:44:12.581750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:20.150 [2024-10-01 03:44:12.581763] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:17:20.150 [2024-10-01 03:44:12.581772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:20.150 [2024-10-01 03:44:12.583069] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:20.150 [2024-10-01 03:44:12.586173] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2754.859 ms, result 0 00:17:20.150 [2024-10-01 03:44:12.586920] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:20.150 { 00:17:20.150 "name": "ftl0", 00:17:20.150 "uuid": "53a1b279-b409-4899-8013-3cb089931240" 00:17:20.150 } 00:17:20.150 03:44:12 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:17:20.150 03:44:12 ftl.ftl_trim -- common/autotest_common.sh@899 -- # local bdev_name=ftl0 00:17:20.150 03:44:12 ftl.ftl_trim -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:20.150 03:44:12 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local i 00:17:20.150 03:44:12 ftl.ftl_trim -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:20.150 03:44:12 ftl.ftl_trim -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:20.150 03:44:12 ftl.ftl_trim -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:20.407 03:44:12 ftl.ftl_trim -- 
common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:20.665 [ 00:17:20.665 { 00:17:20.665 "name": "ftl0", 00:17:20.665 "aliases": [ 00:17:20.665 "53a1b279-b409-4899-8013-3cb089931240" 00:17:20.665 ], 00:17:20.665 "product_name": "FTL disk", 00:17:20.665 "block_size": 4096, 00:17:20.665 "num_blocks": 23592960, 00:17:20.665 "uuid": "53a1b279-b409-4899-8013-3cb089931240", 00:17:20.665 "assigned_rate_limits": { 00:17:20.665 "rw_ios_per_sec": 0, 00:17:20.665 "rw_mbytes_per_sec": 0, 00:17:20.665 "r_mbytes_per_sec": 0, 00:17:20.665 "w_mbytes_per_sec": 0 00:17:20.665 }, 00:17:20.665 "claimed": false, 00:17:20.665 "zoned": false, 00:17:20.665 "supported_io_types": { 00:17:20.665 "read": true, 00:17:20.665 "write": true, 00:17:20.665 "unmap": true, 00:17:20.665 "flush": true, 00:17:20.665 "reset": false, 00:17:20.665 "nvme_admin": false, 00:17:20.665 "nvme_io": false, 00:17:20.665 "nvme_io_md": false, 00:17:20.665 "write_zeroes": true, 00:17:20.665 "zcopy": false, 00:17:20.665 "get_zone_info": false, 00:17:20.665 "zone_management": false, 00:17:20.665 "zone_append": false, 00:17:20.665 "compare": false, 00:17:20.665 "compare_and_write": false, 00:17:20.665 "abort": false, 00:17:20.665 "seek_hole": false, 00:17:20.665 "seek_data": false, 00:17:20.665 "copy": false, 00:17:20.665 "nvme_iov_md": false 00:17:20.665 }, 00:17:20.665 "driver_specific": { 00:17:20.665 "ftl": { 00:17:20.665 "base_bdev": "cd6fab18-707e-4cff-89e8-9f85b8f136a1", 00:17:20.665 "cache": "nvc0n1p0" 00:17:20.665 } 00:17:20.665 } 00:17:20.665 } 00:17:20.665 ] 00:17:20.665 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@907 -- # return 0 00:17:20.665 03:44:13 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:17:20.665 03:44:13 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:20.665 03:44:13 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:17:20.665 03:44:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:17:20.923 03:44:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:17:20.923 { 00:17:20.923 "name": "ftl0", 00:17:20.923 "aliases": [ 00:17:20.923 "53a1b279-b409-4899-8013-3cb089931240" 00:17:20.923 ], 00:17:20.923 "product_name": "FTL disk", 00:17:20.923 "block_size": 4096, 00:17:20.923 "num_blocks": 23592960, 00:17:20.923 "uuid": "53a1b279-b409-4899-8013-3cb089931240", 00:17:20.923 "assigned_rate_limits": { 00:17:20.923 "rw_ios_per_sec": 0, 00:17:20.923 "rw_mbytes_per_sec": 0, 00:17:20.923 "r_mbytes_per_sec": 0, 00:17:20.923 "w_mbytes_per_sec": 0 00:17:20.923 }, 00:17:20.923 "claimed": false, 00:17:20.923 "zoned": false, 00:17:20.923 "supported_io_types": { 00:17:20.923 "read": true, 00:17:20.923 "write": true, 00:17:20.923 "unmap": true, 00:17:20.923 "flush": true, 00:17:20.923 "reset": false, 00:17:20.923 "nvme_admin": false, 00:17:20.923 "nvme_io": false, 00:17:20.923 "nvme_io_md": false, 00:17:20.923 "write_zeroes": true, 00:17:20.924 "zcopy": false, 00:17:20.924 "get_zone_info": false, 00:17:20.924 "zone_management": false, 00:17:20.924 "zone_append": false, 00:17:20.924 "compare": false, 00:17:20.924 "compare_and_write": false, 00:17:20.924 "abort": false, 00:17:20.924 "seek_hole": false, 00:17:20.924 "seek_data": false, 00:17:20.924 "copy": false, 00:17:20.924 "nvme_iov_md": false 00:17:20.924 }, 00:17:20.924 "driver_specific": { 00:17:20.924 "ftl": { 00:17:20.924 "base_bdev": "cd6fab18-707e-4cff-89e8-9f85b8f136a1", 
00:17:20.924 "cache": "nvc0n1p0" 00:17:20.924 } 00:17:20.924 } 00:17:20.924 } 00:17:20.924 ]' 00:17:20.924 03:44:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:17:20.924 03:44:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:17:20.924 03:44:13 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:21.182 [2024-10-01 03:44:13.618981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.182 [2024-10-01 03:44:13.619054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:21.182 [2024-10-01 03:44:13.619067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:21.182 [2024-10-01 03:44:13.619076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.182 [2024-10-01 03:44:13.619106] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:21.182 [2024-10-01 03:44:13.621287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.182 [2024-10-01 03:44:13.621315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:21.182 [2024-10-01 03:44:13.621329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.165 ms 00:17:21.182 [2024-10-01 03:44:13.621337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.182 [2024-10-01 03:44:13.621836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.182 [2024-10-01 03:44:13.621860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:21.182 [2024-10-01 03:44:13.621869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.468 ms 00:17:21.183 [2024-10-01 03:44:13.621875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.183 [2024-10-01 03:44:13.624660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.183 [2024-10-01 03:44:13.624833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:21.183 [2024-10-01 03:44:13.624847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.762 ms 00:17:21.183 [2024-10-01 03:44:13.624854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.183 [2024-10-01 03:44:13.630187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.183 [2024-10-01 03:44:13.630212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:21.183 [2024-10-01 03:44:13.630224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.290 ms 00:17:21.183 [2024-10-01 03:44:13.630231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.183 [2024-10-01 03:44:13.649586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.183 [2024-10-01 03:44:13.649625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:21.183 [2024-10-01 03:44:13.649640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.282 ms 00:17:21.183 [2024-10-01 03:44:13.649647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.183 [2024-10-01 03:44:13.662814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.183 [2024-10-01 03:44:13.662853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:21.183 [2024-10-01 03:44:13.662865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 13.105 ms 00:17:21.183 [2024-10-01 03:44:13.662872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.183 [2024-10-01 03:44:13.663079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.183 [2024-10-01 03:44:13.663090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:21.183 [2024-10-01 03:44:13.663100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.150 ms 00:17:21.183 [2024-10-01 03:44:13.663107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.183 [2024-10-01 03:44:13.681275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.183 [2024-10-01 03:44:13.681311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:21.183 [2024-10-01 03:44:13.681323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.122 ms 00:17:21.183 [2024-10-01 03:44:13.681329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.183 [2024-10-01 03:44:13.698809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.183 [2024-10-01 03:44:13.698844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:21.183 [2024-10-01 03:44:13.698857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.414 ms 00:17:21.183 [2024-10-01 03:44:13.698863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.183 [2024-10-01 03:44:13.715974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.183 [2024-10-01 03:44:13.716029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:21.183 [2024-10-01 03:44:13.716041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.045 ms 00:17:21.183 [2024-10-01 03:44:13.716047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.442 [2024-10-01 03:44:13.733309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.442 [2024-10-01 03:44:13.733350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:21.442 [2024-10-01 03:44:13.733361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.138 ms 00:17:21.442 [2024-10-01 03:44:13.733368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.442 [2024-10-01 03:44:13.733426] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:21.442 [2024-10-01 03:44:13.733441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733501] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:21.442 [2024-10-01 03:44:13.733574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 
[2024-10-01 03:44:13.733690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:17:21.443 [2024-10-01 03:44:13.733861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.733994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:21.443 [2024-10-01 03:44:13.734190] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:21.443 [2024-10-01 03:44:13.734199] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53a1b279-b409-4899-8013-3cb089931240 00:17:21.443 [2024-10-01 03:44:13.734205] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:21.443 [2024-10-01 03:44:13.734212] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:21.443 [2024-10-01 03:44:13.734218] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:21.443 [2024-10-01 03:44:13.734227] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:21.443 [2024-10-01 03:44:13.734233] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:21.443 [2024-10-01 03:44:13.734251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:17:21.443 [2024-10-01 03:44:13.734257] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:21.443 [2024-10-01 03:44:13.734263] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:21.443 [2024-10-01 03:44:13.734268] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:21.443 [2024-10-01 03:44:13.734275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.444 [2024-10-01 03:44:13.734281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:21.444 [2024-10-01 03:44:13.734290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.851 ms 00:17:21.444 [2024-10-01 03:44:13.734299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.744531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.444 [2024-10-01 03:44:13.744567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:21.444 [2024-10-01 03:44:13.744582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.204 ms 00:17:21.444 [2024-10-01 03:44:13.744590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.744939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:21.444 [2024-10-01 03:44:13.744955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:21.444 [2024-10-01 03:44:13.744966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:17:21.444 [2024-10-01 03:44:13.744972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.780990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.781056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:21.444 [2024-10-01 03:44:13.781069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.781076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.781191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.781200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:21.444 [2024-10-01 03:44:13.781211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.781218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.781274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.781283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:21.444 [2024-10-01 03:44:13.781293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.781299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.781330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.781336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:21.444 [2024-10-01 03:44:13.781343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.781351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.847213] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.847270] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:21.444 [2024-10-01 03:44:13.847283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.847290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.898240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.898297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:21.444 [2024-10-01 03:44:13.898310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.898319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.898414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.898429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:21.444 [2024-10-01 03:44:13.898440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.898447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.898492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.898500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:21.444 [2024-10-01 03:44:13.898521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.898528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.898632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.898641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:21.444 [2024-10-01 03:44:13.898649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.898656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.898703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.898711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:21.444 [2024-10-01 03:44:13.898720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.898726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.898782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.898790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:21.444 [2024-10-01 03:44:13.898800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.898806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:21.444 [2024-10-01 03:44:13.898866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:21.444 [2024-10-01 03:44:13.898877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:21.444 [2024-10-01 03:44:13.898885] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:21.444 [2024-10-01 03:44:13.898891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:21.444 [2024-10-01 03:44:13.899091] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 280.103 ms, result 0 00:17:21.444 true 00:17:21.444 03:44:13 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 74095 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74095 ']' 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74095 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74095 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:21.444 killing process with pid 74095 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74095' 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74095 00:17:21.444 03:44:13 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74095 00:17:28.003 03:44:19 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:17:28.003 65536+0 records in 00:17:28.003 65536+0 records out 00:17:28.003 268435456 bytes (268 MB, 256 MiB) copied, 0.807324 s, 333 MB/s 00:17:28.003 03:44:20 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:28.003 [2024-10-01 03:44:20.356263] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
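The 256 MiB random pattern and its replay into ftl0 (trim.sh@66 and trim.sh@69 above) can be reproduced with the same two commands, assuming the ftl.json bdev config saved earlier in the run:

    # 65536 blocks x 4 KiB = 268435456 bytes, matching the dd summary above
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
    # replay the pattern into the FTL bdev; spdk_dd loads bdevs from the JSON config
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern \
        --ob=ftl0 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Note that spdk_dd brings up its own SPDK application instance (the "Starting SPDK" banner above) and recreates the bdev stack from the JSON config rather than talking to a running target over RPC.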
00:17:28.003 [2024-10-01 03:44:20.356387] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74277 ] 00:17:28.003 [2024-10-01 03:44:20.502146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.261 [2024-10-01 03:44:20.686131] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:28.520 [2024-10-01 03:44:20.915042] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:28.520 [2024-10-01 03:44:20.915319] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:28.520 [2024-10-01 03:44:21.068897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.520 [2024-10-01 03:44:21.068957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:28.520 [2024-10-01 03:44:21.068972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:28.520 [2024-10-01 03:44:21.068980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.071257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.780 [2024-10-01 03:44:21.071288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:28.780 [2024-10-01 03:44:21.071297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:17:28.780 [2024-10-01 03:44:21.071305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.071370] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:28.780 [2024-10-01 03:44:21.071954] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:28.780 [2024-10-01 03:44:21.071972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.780 [2024-10-01 03:44:21.071981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:28.780 [2024-10-01 03:44:21.071989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:17:28.780 [2024-10-01 03:44:21.071995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.073342] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:28.780 [2024-10-01 03:44:21.083601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.780 [2024-10-01 03:44:21.083629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:28.780 [2024-10-01 03:44:21.083639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.260 ms 00:17:28.780 [2024-10-01 03:44:21.083646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.083729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.780 [2024-10-01 03:44:21.083738] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:28.780 [2024-10-01 03:44:21.083748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:28.780 [2024-10-01 03:44:21.083753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.090040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:28.780 [2024-10-01 03:44:21.090205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:28.780 [2024-10-01 03:44:21.090219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.252 ms 00:17:28.780 [2024-10-01 03:44:21.090225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.090311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.780 [2024-10-01 03:44:21.090323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:28.780 [2024-10-01 03:44:21.090330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:17:28.780 [2024-10-01 03:44:21.090337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.090364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.780 [2024-10-01 03:44:21.090371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:28.780 [2024-10-01 03:44:21.090378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:28.780 [2024-10-01 03:44:21.090385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.090404] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:28.780 [2024-10-01 03:44:21.093563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.780 [2024-10-01 03:44:21.093674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:28.780 [2024-10-01 03:44:21.093687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.166 ms 00:17:28.780 [2024-10-01 03:44:21.093694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.093739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.780 [2024-10-01 03:44:21.093751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:28.780 [2024-10-01 03:44:21.093758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:28.780 [2024-10-01 03:44:21.093764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.780 [2024-10-01 03:44:21.093780] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:28.780 [2024-10-01 03:44:21.093797] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:28.780 [2024-10-01 03:44:21.093827] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:28.780 [2024-10-01 03:44:21.093839] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:28.780 [2024-10-01 03:44:21.093926] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:28.780 [2024-10-01 03:44:21.093936] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:28.780 [2024-10-01 03:44:21.093944] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:28.780 [2024-10-01 03:44:21.093953] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:28.781 [2024-10-01 03:44:21.093962] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:28.781 [2024-10-01 03:44:21.093968] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:28.781 [2024-10-01 03:44:21.093974] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:28.781 [2024-10-01 03:44:21.093980] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:28.781 [2024-10-01 03:44:21.093985] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:28.781 [2024-10-01 03:44:21.093992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.781 [2024-10-01 03:44:21.094016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:28.781 [2024-10-01 03:44:21.094024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:28.781 [2024-10-01 03:44:21.094031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.781 [2024-10-01 03:44:21.094102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.781 [2024-10-01 03:44:21.094110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:28.781 [2024-10-01 03:44:21.094117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:28.781 [2024-10-01 03:44:21.094123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.781 [2024-10-01 03:44:21.094204] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:28.781 [2024-10-01 03:44:21.094213] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:28.781 [2024-10-01 03:44:21.094222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.781 [2024-10-01 03:44:21.094229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:28.781 [2024-10-01 03:44:21.094242] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:28.781 [2024-10-01 03:44:21.094253] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:28.781 [2024-10-01 03:44:21.094260] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.781 [2024-10-01 03:44:21.094272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:28.781 [2024-10-01 03:44:21.094283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:28.781 [2024-10-01 03:44:21.094289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:28.781 [2024-10-01 03:44:21.094294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:28.781 [2024-10-01 03:44:21.094301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:28.781 [2024-10-01 03:44:21.094308] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094314] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:28.781 [2024-10-01 03:44:21.094319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:28.781 [2024-10-01 03:44:21.094324] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:28.781 [2024-10-01 03:44:21.094335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.781 [2024-10-01 03:44:21.094346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:28.781 [2024-10-01 03:44:21.094351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094356] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.781 [2024-10-01 03:44:21.094361] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:28.781 [2024-10-01 03:44:21.094367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.781 [2024-10-01 03:44:21.094379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:28.781 [2024-10-01 03:44:21.094384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:28.781 [2024-10-01 03:44:21.094395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:28.781 [2024-10-01 03:44:21.094401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094406] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.781 [2024-10-01 03:44:21.094412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:28.781 [2024-10-01 03:44:21.094417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:28.781 [2024-10-01 03:44:21.094422] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:28.781 [2024-10-01 03:44:21.094427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:28.781 [2024-10-01 03:44:21.094446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:28.781 [2024-10-01 03:44:21.094451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:28.781 [2024-10-01 03:44:21.094463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:28.781 [2024-10-01 03:44:21.094469] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094475] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:28.781 [2024-10-01 03:44:21.094482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:28.781 [2024-10-01 03:44:21.094489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:28.781 [2024-10-01 03:44:21.094495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:28.781 [2024-10-01 03:44:21.094502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:28.781 [2024-10-01 03:44:21.094508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:28.781 [2024-10-01 03:44:21.094513] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:28.781 
[2024-10-01 03:44:21.094519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:28.781 [2024-10-01 03:44:21.094524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:28.781 [2024-10-01 03:44:21.094530] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:28.781 [2024-10-01 03:44:21.094536] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:28.781 [2024-10-01 03:44:21.094548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.781 [2024-10-01 03:44:21.094554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:28.781 [2024-10-01 03:44:21.094560] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:28.781 [2024-10-01 03:44:21.094565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:28.781 [2024-10-01 03:44:21.094571] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:28.781 [2024-10-01 03:44:21.094576] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:28.781 [2024-10-01 03:44:21.094582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:28.781 [2024-10-01 03:44:21.094587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:28.781 [2024-10-01 03:44:21.094594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:28.781 [2024-10-01 03:44:21.094599] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:28.781 [2024-10-01 03:44:21.094605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:28.781 [2024-10-01 03:44:21.094610] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:28.781 [2024-10-01 03:44:21.094615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:28.781 [2024-10-01 03:44:21.094621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:28.781 [2024-10-01 03:44:21.094626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:28.781 [2024-10-01 03:44:21.094632] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:28.781 [2024-10-01 03:44:21.094639] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:28.781 [2024-10-01 03:44:21.094645] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:17:28.781 [2024-10-01 03:44:21.094651] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:28.781 [2024-10-01 03:44:21.094657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:28.781 [2024-10-01 03:44:21.094662] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:28.781 [2024-10-01 03:44:21.094668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.781 [2024-10-01 03:44:21.094676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:28.781 [2024-10-01 03:44:21.094683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:17:28.781 [2024-10-01 03:44:21.094689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.781 [2024-10-01 03:44:21.136063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.781 [2024-10-01 03:44:21.136295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:28.781 [2024-10-01 03:44:21.136314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.316 ms 00:17:28.781 [2024-10-01 03:44:21.136323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.781 [2024-10-01 03:44:21.136486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.781 [2024-10-01 03:44:21.136496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:28.781 [2024-10-01 03:44:21.136503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:17:28.781 [2024-10-01 03:44:21.136510] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.162946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.162996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:28.782 [2024-10-01 03:44:21.163020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.415 ms 00:17:28.782 [2024-10-01 03:44:21.163026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.163139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.163165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:28.782 [2024-10-01 03:44:21.163173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:28.782 [2024-10-01 03:44:21.163182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.163574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.163592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:28.782 [2024-10-01 03:44:21.163604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:17:28.782 [2024-10-01 03:44:21.163610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.163745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.163754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:28.782 [2024-10-01 03:44:21.163761] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:17:28.782 [2024-10-01 03:44:21.163767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.175046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.175074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:28.782 [2024-10-01 03:44:21.175083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.259 ms 00:17:28.782 [2024-10-01 03:44:21.175092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.185238] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:17:28.782 [2024-10-01 03:44:21.185272] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:28.782 [2024-10-01 03:44:21.185283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.185290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:28.782 [2024-10-01 03:44:21.185297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.087 ms 00:17:28.782 [2024-10-01 03:44:21.185303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.204568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.204614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:28.782 [2024-10-01 03:44:21.204632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.193 ms 00:17:28.782 [2024-10-01 03:44:21.204639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.214183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.214369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:28.782 [2024-10-01 03:44:21.214384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.449 ms 00:17:28.782 [2024-10-01 03:44:21.214390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.223223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.223250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:28.782 [2024-10-01 03:44:21.223259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.764 ms 00:17:28.782 [2024-10-01 03:44:21.223265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.223760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.223777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:28.782 [2024-10-01 03:44:21.223784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:17:28.782 [2024-10-01 03:44:21.223791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.272038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.272099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:28.782 [2024-10-01 03:44:21.272111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.224 ms 00:17:28.782 [2024-10-01 03:44:21.272118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.281066] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:28.782 [2024-10-01 03:44:21.296353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.296395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:28.782 [2024-10-01 03:44:21.296408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.103 ms 00:17:28.782 [2024-10-01 03:44:21.296415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.296521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.296530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:28.782 [2024-10-01 03:44:21.296538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:28.782 [2024-10-01 03:44:21.296545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.296595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.296607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:28.782 [2024-10-01 03:44:21.296614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:28.782 [2024-10-01 03:44:21.296621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.296640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.296646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:28.782 [2024-10-01 03:44:21.296654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:28.782 [2024-10-01 03:44:21.296660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.296690] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:28.782 [2024-10-01 03:44:21.296699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.296706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:28.782 [2024-10-01 03:44:21.296715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:28.782 [2024-10-01 03:44:21.296721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.315719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.315889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:28.782 [2024-10-01 03:44:21.315906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.981 ms 00:17:28.782 [2024-10-01 03:44:21.315913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:28.782 [2024-10-01 03:44:21.316020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:28.782 [2024-10-01 03:44:21.316033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:28.782 [2024-10-01 03:44:21.316040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:28.782 [2024-10-01 03:44:21.316047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
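Each management step above is traced as an Action (or Rollback), name, duration, status quadruplet, so the 'FTL startup' total reported just below can be cross-checked offline. A minimal sketch, assuming the console output is saved one notice per line as a hypothetical startup.log; the per-step sum comes out a few milliseconds under the reported 247.692 ms because time spent between steps is not attributed to any step:

    # pair each step name with its duration, then total the durations
    awk '/428:trace_step/ { sub(/.*name: /, ""); n = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); printf "%-28s %s\n", n, $0; s += $1 }
         END { printf "total: %.3f ms\n", s }' startup.log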
00:17:28.782 [2024-10-01 03:44:21.316858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:28.782 [2024-10-01 03:44:21.319484] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 247.692 ms, result 0
00:17:28.782 [2024-10-01 03:44:21.320248] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:29.041 [2024-10-01 03:44:21.331236] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:17:34.714  Copying: 44/256 [MB] (44 MBps)
Copying: 89/256 [MB] (44 MBps)
Copying: 135/256 [MB] (45 MBps)
Copying: 179/256 [MB] (43 MBps)
Copying: 222/256 [MB] (43 MBps)
Copying: 256/256 [MB] (average 45 MBps)
[2024-10-01 03:44:27.016648] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:17:34.714 [2024-10-01 03:44:27.024346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.714 [2024-10-01 03:44:27.024497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:17:34.714 [2024-10-01 03:44:27.024551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:17:34.714 [2024-10-01 03:44:27.024571] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.714 [2024-10-01 03:44:27.024603] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:17:34.714 [2024-10-01 03:44:27.026856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.714 [2024-10-01 03:44:27.026947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:17:34.714 [2024-10-01 03:44:27.026995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.184 ms
00:17:34.714 [2024-10-01 03:44:27.027026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.714 [2024-10-01 03:44:27.028798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.714 [2024-10-01 03:44:27.028888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:17:34.714 [2024-10-01 03:44:27.028940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.732 ms
00:17:34.714 [2024-10-01 03:44:27.028958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.714 [2024-10-01 03:44:27.034552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.714 [2024-10-01 03:44:27.034642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:17:34.714 [2024-10-01 03:44:27.034703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.569 ms
00:17:34.714 [2024-10-01 03:44:27.034722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.715 [2024-10-01 03:44:27.040039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.715 [2024-10-01 03:44:27.040122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:17:34.715 [2024-10-01 03:44:27.040169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.284 ms
00:17:34.715 [2024-10-01 03:44:27.040187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.715 [2024-10-01 03:44:27.058672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:34.715 [2024-10-01 03:44:27.058780] mngt/ftl_mngt.c: 428:trace_step:
*NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:34.715 [2024-10-01 03:44:27.058820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.428 ms 00:17:34.715 [2024-10-01 03:44:27.058838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.715 [2024-10-01 03:44:27.070810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.715 [2024-10-01 03:44:27.070913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:34.715 [2024-10-01 03:44:27.070953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.911 ms 00:17:34.715 [2024-10-01 03:44:27.070970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.715 [2024-10-01 03:44:27.071086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.715 [2024-10-01 03:44:27.071110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:34.715 [2024-10-01 03:44:27.071126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:17:34.715 [2024-10-01 03:44:27.071163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.715 [2024-10-01 03:44:27.089722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.715 [2024-10-01 03:44:27.089835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:34.715 [2024-10-01 03:44:27.089876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.530 ms 00:17:34.715 [2024-10-01 03:44:27.089893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.715 [2024-10-01 03:44:27.107229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.715 [2024-10-01 03:44:27.107330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:34.715 [2024-10-01 03:44:27.107371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.301 ms 00:17:34.715 [2024-10-01 03:44:27.107388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.715 [2024-10-01 03:44:27.124717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.715 [2024-10-01 03:44:27.124835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:34.715 [2024-10-01 03:44:27.124901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.293 ms 00:17:34.715 [2024-10-01 03:44:27.124919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.715 [2024-10-01 03:44:27.142102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.715 [2024-10-01 03:44:27.142202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:34.715 [2024-10-01 03:44:27.142241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.116 ms 00:17:34.715 [2024-10-01 03:44:27.142257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.715 [2024-10-01 03:44:27.142293] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:34.715 [2024-10-01 03:44:27.142365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 
[2024-10-01 03:44:27.142467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.142992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143136] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: 
free 00:17:34.715 [2024-10-01 03:44:27.143489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.143988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 
261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:34.715 [2024-10-01 03:44:27.144569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.144964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:34.716 [2024-10-01 03:44:27.145243] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:34.716 [2024-10-01 03:44:27.145250] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53a1b279-b409-4899-8013-3cb089931240 00:17:34.716 [2024-10-01 03:44:27.145258] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 
total valid LBAs: 0 00:17:34.716 [2024-10-01 03:44:27.145264] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:34.716 [2024-10-01 03:44:27.145270] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:34.716 [2024-10-01 03:44:27.145279] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:34.716 [2024-10-01 03:44:27.145285] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:34.716 [2024-10-01 03:44:27.145292] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:34.716 [2024-10-01 03:44:27.145298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:34.716 [2024-10-01 03:44:27.145303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:34.716 [2024-10-01 03:44:27.145309] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:34.716 [2024-10-01 03:44:27.145314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.716 [2024-10-01 03:44:27.145321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:34.716 [2024-10-01 03:44:27.145327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.022 ms 00:17:34.716 [2024-10-01 03:44:27.145333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.716 [2024-10-01 03:44:27.155078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.716 [2024-10-01 03:44:27.155181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:34.716 [2024-10-01 03:44:27.155193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.726 ms 00:17:34.716 [2024-10-01 03:44:27.155199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.716 [2024-10-01 03:44:27.155502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:34.716 [2024-10-01 03:44:27.155511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:34.716 [2024-10-01 03:44:27.155519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:17:34.716 [2024-10-01 03:44:27.155524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.716 [2024-10-01 03:44:27.180178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.716 [2024-10-01 03:44:27.180215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:34.716 [2024-10-01 03:44:27.180224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.716 [2024-10-01 03:44:27.180230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.716 [2024-10-01 03:44:27.180324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.716 [2024-10-01 03:44:27.180333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:34.716 [2024-10-01 03:44:27.180340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.716 [2024-10-01 03:44:27.180346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.716 [2024-10-01 03:44:27.180387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.716 [2024-10-01 03:44:27.180398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:34.716 [2024-10-01 03:44:27.180404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.716 [2024-10-01 
03:44:27.180411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.716 [2024-10-01 03:44:27.180425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.716 [2024-10-01 03:44:27.180432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:34.716 [2024-10-01 03:44:27.180438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.716 [2024-10-01 03:44:27.180444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.716 [2024-10-01 03:44:27.243006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.716 [2024-10-01 03:44:27.243068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:34.716 [2024-10-01 03:44:27.243080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.716 [2024-10-01 03:44:27.243087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.975 [2024-10-01 03:44:27.293840] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.975 [2024-10-01 03:44:27.293896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:34.975 [2024-10-01 03:44:27.293907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.975 [2024-10-01 03:44:27.293913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.975 [2024-10-01 03:44:27.293991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.975 [2024-10-01 03:44:27.293999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:34.975 [2024-10-01 03:44:27.294023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.975 [2024-10-01 03:44:27.294030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.975 [2024-10-01 03:44:27.294067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.975 [2024-10-01 03:44:27.294074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:34.975 [2024-10-01 03:44:27.294082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.975 [2024-10-01 03:44:27.294088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.975 [2024-10-01 03:44:27.294167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.975 [2024-10-01 03:44:27.294175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:34.975 [2024-10-01 03:44:27.294182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.975 [2024-10-01 03:44:27.294191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.975 [2024-10-01 03:44:27.294217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.975 [2024-10-01 03:44:27.294225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:34.975 [2024-10-01 03:44:27.294232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:34.975 [2024-10-01 03:44:27.294238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:34.975 [2024-10-01 03:44:27.294273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:34.975 [2024-10-01 03:44:27.294280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:34.975 [2024-10-01 03:44:27.294286] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.975 [2024-10-01 03:44:27.294292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.975 [2024-10-01 03:44:27.294333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:34.975 [2024-10-01 03:44:27.294341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:17:34.975 [2024-10-01 03:44:27.294347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:34.975 [2024-10-01 03:44:27.294354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:34.975 [2024-10-01 03:44:27.294498] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 270.126 ms, result 0
00:17:35.542
00:17:35.542
00:17:35.542 03:44:27 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:17:35.542 03:44:27 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=74364
00:17:35.542 03:44:27 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 74364
00:17:35.542 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:17:35.542 03:44:27 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74364 ']'
00:17:35.542 03:44:27 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:17:35.542 03:44:27 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100
00:17:35.542 03:44:27 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:17:35.542 03:44:27 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable
00:17:35.542 03:44:27 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:17:35.542 [2024-10-01 03:44:28.045464] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
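The launch-and-wait pattern traced above can be re-sketched in a few lines of shell. This is a hedged approximation of what ftl/trim.sh and the waitforlisten helper do, not the verbatim code from common/autotest_common.sh; rpc_get_methods is assumed as the readiness probe, and rootdir stands in for /home/vagrant/spdk_repo/spdk:

    # start the target in the background and remember its pid (74364 in this run)
    "$rootdir/build/bin/spdk_tgt" -L ftl_init &
    svcpid=$!
    # poll the RPC socket until the target answers, up to max_retries attempts
    for ((i = 100; i > 0; i--)); do
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.5
    done
    # replay the FTL configuration saved by the previous run (JSON on stdin)
    "$rootdir/scripts/rpc.py" load_config < "$rootdir/test/ftl/config/ftl.json"

The config path here mirrors the --json argument passed to spdk_dd earlier in this log; whether load_config consumes that exact file is an assumption.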
00:17:35.542 [2024-10-01 03:44:28.045590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74364 ] 00:17:35.800 [2024-10-01 03:44:28.193521] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:36.058 [2024-10-01 03:44:28.384329] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:36.623 03:44:28 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:36.623 03:44:28 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:36.623 03:44:28 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:36.623 [2024-10-01 03:44:29.120748] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:36.623 [2024-10-01 03:44:29.121058] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:36.881 [2024-10-01 03:44:29.291028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.881 [2024-10-01 03:44:29.291082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:36.881 [2024-10-01 03:44:29.291096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:36.881 [2024-10-01 03:44:29.291107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.881 [2024-10-01 03:44:29.293279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.881 [2024-10-01 03:44:29.293309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:36.881 [2024-10-01 03:44:29.293320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.157 ms 00:17:36.881 [2024-10-01 03:44:29.293326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.881 [2024-10-01 03:44:29.293391] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:36.881 [2024-10-01 03:44:29.293909] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:36.881 [2024-10-01 03:44:29.293929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.881 [2024-10-01 03:44:29.293936] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:36.881 [2024-10-01 03:44:29.293944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:17:36.881 [2024-10-01 03:44:29.293950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.881 [2024-10-01 03:44:29.295436] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:36.881 [2024-10-01 03:44:29.305740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.881 [2024-10-01 03:44:29.305770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:36.881 [2024-10-01 03:44:29.305779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.308 ms 00:17:36.881 [2024-10-01 03:44:29.305787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.881 [2024-10-01 03:44:29.305858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.881 [2024-10-01 03:44:29.305870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:36.881 [2024-10-01 03:44:29.305876] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:17:36.881 [2024-10-01 03:44:29.305884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.881 [2024-10-01 03:44:29.312115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.881 [2024-10-01 03:44:29.312322] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:36.881 [2024-10-01 03:44:29.312335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.193 ms 00:17:36.881 [2024-10-01 03:44:29.312344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.881 [2024-10-01 03:44:29.312443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.881 [2024-10-01 03:44:29.312454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:36.881 [2024-10-01 03:44:29.312461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:17:36.882 [2024-10-01 03:44:29.312468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.882 [2024-10-01 03:44:29.312489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.882 [2024-10-01 03:44:29.312498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:36.882 [2024-10-01 03:44:29.312505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:36.882 [2024-10-01 03:44:29.312512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.882 [2024-10-01 03:44:29.312532] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:36.882 [2024-10-01 03:44:29.315582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.882 [2024-10-01 03:44:29.315694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:36.882 [2024-10-01 03:44:29.315709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.053 ms 00:17:36.882 [2024-10-01 03:44:29.315718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.882 [2024-10-01 03:44:29.315754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.882 [2024-10-01 03:44:29.315762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:36.882 [2024-10-01 03:44:29.315770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:36.882 [2024-10-01 03:44:29.315776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.882 [2024-10-01 03:44:29.315795] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:36.882 [2024-10-01 03:44:29.315813] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:36.882 [2024-10-01 03:44:29.315848] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:36.882 [2024-10-01 03:44:29.315863] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:36.882 [2024-10-01 03:44:29.315950] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:36.882 [2024-10-01 03:44:29.315960] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:36.882 [2024-10-01 03:44:29.315970] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:36.882 [2024-10-01 03:44:29.315979] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:36.882 [2024-10-01 03:44:29.315987] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:36.882 [2024-10-01 03:44:29.315994] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:36.882 [2024-10-01 03:44:29.316014] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:36.882 [2024-10-01 03:44:29.316020] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:36.882 [2024-10-01 03:44:29.316030] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:36.882 [2024-10-01 03:44:29.316040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.882 [2024-10-01 03:44:29.316049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:36.882 [2024-10-01 03:44:29.316056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:17:36.882 [2024-10-01 03:44:29.316063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.882 [2024-10-01 03:44:29.316137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.882 [2024-10-01 03:44:29.316146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:36.882 [2024-10-01 03:44:29.316152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:36.882 [2024-10-01 03:44:29.316159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.882 [2024-10-01 03:44:29.316240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:36.882 [2024-10-01 03:44:29.316252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:36.882 [2024-10-01 03:44:29.316259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:36.882 [2024-10-01 03:44:29.316267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316274] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:36.882 [2024-10-01 03:44:29.316281] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:36.882 [2024-10-01 03:44:29.316297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:36.882 [2024-10-01 03:44:29.316303] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316310] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:36.882 [2024-10-01 03:44:29.316315] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:36.882 [2024-10-01 03:44:29.316322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:36.882 [2024-10-01 03:44:29.316328] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:36.882 [2024-10-01 03:44:29.316336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:36.882 [2024-10-01 03:44:29.316341] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:36.882 [2024-10-01 03:44:29.316347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.882 
[2024-10-01 03:44:29.316352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:36.882 [2024-10-01 03:44:29.316361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:36.882 [2024-10-01 03:44:29.316372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:36.882 [2024-10-01 03:44:29.316384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.882 [2024-10-01 03:44:29.316396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:36.882 [2024-10-01 03:44:29.316404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316409] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.882 [2024-10-01 03:44:29.316415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:36.882 [2024-10-01 03:44:29.316421] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316428] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.882 [2024-10-01 03:44:29.316433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:36.882 [2024-10-01 03:44:29.316439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:36.882 [2024-10-01 03:44:29.316451] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:36.882 [2024-10-01 03:44:29.316456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:36.882 [2024-10-01 03:44:29.316468] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:36.882 [2024-10-01 03:44:29.316475] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:36.882 [2024-10-01 03:44:29.316479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:36.882 [2024-10-01 03:44:29.316486] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:36.882 [2024-10-01 03:44:29.316492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:36.882 [2024-10-01 03:44:29.316500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:36.882 [2024-10-01 03:44:29.316512] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:36.882 [2024-10-01 03:44:29.316517] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316523] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:36.882 [2024-10-01 03:44:29.316529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:36.882 [2024-10-01 03:44:29.316537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:36.882 [2024-10-01 03:44:29.316542] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:36.882 [2024-10-01 03:44:29.316550] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:36.882 [2024-10-01 03:44:29.316555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:36.882 [2024-10-01 03:44:29.316563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:36.882 [2024-10-01 03:44:29.316569] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:36.882 [2024-10-01 03:44:29.316575] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:36.882 [2024-10-01 03:44:29.316580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:36.882 [2024-10-01 03:44:29.316588] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:36.882 [2024-10-01 03:44:29.316597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:36.882 [2024-10-01 03:44:29.316607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:36.882 [2024-10-01 03:44:29.316612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:36.882 [2024-10-01 03:44:29.316620] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:36.882 [2024-10-01 03:44:29.316626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:36.882 [2024-10-01 03:44:29.316632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:36.882 [2024-10-01 03:44:29.316638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:36.882 [2024-10-01 03:44:29.316645] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:36.882 [2024-10-01 03:44:29.316650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:36.882 [2024-10-01 03:44:29.316658] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:36.882 [2024-10-01 03:44:29.316663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:36.882 [2024-10-01 03:44:29.316670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:36.882 [2024-10-01 03:44:29.316675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:36.883 [2024-10-01 03:44:29.316682] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:36.883 [2024-10-01 03:44:29.316687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:36.883 [2024-10-01 03:44:29.316694] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:36.883 [2024-10-01 
03:44:29.316701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:36.883 [2024-10-01 03:44:29.316712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:36.883 [2024-10-01 03:44:29.316717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:36.883 [2024-10-01 03:44:29.316724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:36.883 [2024-10-01 03:44:29.316729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:36.883 [2024-10-01 03:44:29.316737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.316743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:36.883 [2024-10-01 03:44:29.316750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:17:36.883 [2024-10-01 03:44:29.316755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-10-01 03:44:29.340894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.340928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:36.883 [2024-10-01 03:44:29.340939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.076 ms 00:17:36.883 [2024-10-01 03:44:29.340946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-10-01 03:44:29.341062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.341071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:36.883 [2024-10-01 03:44:29.341079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:17:36.883 [2024-10-01 03:44:29.341085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-10-01 03:44:29.377981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.378049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:36.883 [2024-10-01 03:44:29.378074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.869 ms 00:17:36.883 [2024-10-01 03:44:29.378087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-10-01 03:44:29.378205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.378223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:36.883 [2024-10-01 03:44:29.378241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:36.883 [2024-10-01 03:44:29.378255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-10-01 03:44:29.378748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.378771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:36.883 [2024-10-01 03:44:29.378787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:17:36.883 [2024-10-01 03:44:29.378799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:36.883 [2024-10-01 03:44:29.378998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.379039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:36.883 [2024-10-01 03:44:29.379055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.164 ms 00:17:36.883 [2024-10-01 03:44:29.379067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-10-01 03:44:29.394990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.395029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:36.883 [2024-10-01 03:44:29.395041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.886 ms 00:17:36.883 [2024-10-01 03:44:29.395050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-10-01 03:44:29.405358] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:36.883 [2024-10-01 03:44:29.405552] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:36.883 [2024-10-01 03:44:29.405568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.405575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:36.883 [2024-10-01 03:44:29.405584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.407 ms 00:17:36.883 [2024-10-01 03:44:29.405591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:36.883 [2024-10-01 03:44:29.424623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:36.883 [2024-10-01 03:44:29.424781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:36.883 [2024-10-01 03:44:29.424800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.707 ms 00:17:36.883 [2024-10-01 03:44:29.424813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.433972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.434012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:37.141 [2024-10-01 03:44:29.434025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.101 ms 00:17:37.141 [2024-10-01 03:44:29.434032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.442803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.442829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:37.141 [2024-10-01 03:44:29.442838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.725 ms 00:17:37.141 [2024-10-01 03:44:29.442845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.443377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.443397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:37.141 [2024-10-01 03:44:29.443407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:17:37.141 [2024-10-01 03:44:29.443415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 
03:44:29.492259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.492314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:37.141 [2024-10-01 03:44:29.492328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.821 ms 00:17:37.141 [2024-10-01 03:44:29.492338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.500694] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:37.141 [2024-10-01 03:44:29.515436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.515486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:37.141 [2024-10-01 03:44:29.515497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.008 ms 00:17:37.141 [2024-10-01 03:44:29.515505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.515618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.515629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:37.141 [2024-10-01 03:44:29.515637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:37.141 [2024-10-01 03:44:29.515644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.515695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.515705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:37.141 [2024-10-01 03:44:29.515711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:17:37.141 [2024-10-01 03:44:29.515720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.515740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.515748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:37.141 [2024-10-01 03:44:29.515758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:37.141 [2024-10-01 03:44:29.515767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.515796] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:37.141 [2024-10-01 03:44:29.515809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.515815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:37.141 [2024-10-01 03:44:29.515823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:37.141 [2024-10-01 03:44:29.515828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.534502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.534685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:37.141 [2024-10-01 03:44:29.534705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.652 ms 00:17:37.141 [2024-10-01 03:44:29.534714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.534798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.141 [2024-10-01 03:44:29.534806] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:37.141 [2024-10-01 03:44:29.534815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:17:37.141 [2024-10-01 03:44:29.534821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.141 [2024-10-01 03:44:29.535878] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:37.141 [2024-10-01 03:44:29.538314] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 244.322 ms, result 0 00:17:37.141 [2024-10-01 03:44:29.539420] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:37.141 Some configs were skipped because the RPC state that can call them passed over. 00:17:37.141 03:44:29 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:37.400 [2024-10-01 03:44:29.763979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.400 [2024-10-01 03:44:29.764230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:37.400 [2024-10-01 03:44:29.764333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.603 ms 00:17:37.400 [2024-10-01 03:44:29.764354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.400 [2024-10-01 03:44:29.764401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.029 ms, result 0 00:17:37.400 true 00:17:37.400 03:44:29 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:37.659 [2024-10-01 03:44:29.967953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:37.659 [2024-10-01 03:44:29.968167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:37.659 [2024-10-01 03:44:29.968216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.369 ms 00:17:37.659 [2024-10-01 03:44:29.968234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:37.659 [2024-10-01 03:44:29.968281] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.703 ms, result 0 00:17:37.659 true 00:17:37.659 03:44:29 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 74364 00:17:37.659 03:44:29 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74364 ']' 00:17:37.659 03:44:29 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74364 00:17:37.659 03:44:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:37.659 03:44:29 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:37.659 03:44:29 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74364 00:17:37.659 killing process with pid 74364 00:17:37.659 03:44:30 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:37.659 03:44:30 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:37.659 03:44:30 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74364' 00:17:37.659 03:44:30 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74364 00:17:37.659 03:44:30 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74364 00:17:38.226 [2024-10-01 03:44:30.578359] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.578641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:38.226 [2024-10-01 03:44:30.578659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:38.226 [2024-10-01 03:44:30.578668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.578694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:38.226 [2024-10-01 03:44:30.580900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.580932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:38.226 [2024-10-01 03:44:30.580944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.191 ms 00:17:38.226 [2024-10-01 03:44:30.580950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.581216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.581225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:38.226 [2024-10-01 03:44:30.581236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:17:38.226 [2024-10-01 03:44:30.581243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.584375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.584399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:38.226 [2024-10-01 03:44:30.584408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.115 ms 00:17:38.226 [2024-10-01 03:44:30.584414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.589697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.589722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:38.226 [2024-10-01 03:44:30.589732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.252 ms 00:17:38.226 [2024-10-01 03:44:30.589740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.597493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.597519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:38.226 [2024-10-01 03:44:30.597530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.693 ms 00:17:38.226 [2024-10-01 03:44:30.597536] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.604466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.604507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:38.226 [2024-10-01 03:44:30.604518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.896 ms 00:17:38.226 [2024-10-01 03:44:30.604530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.604631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.604641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:38.226 [2024-10-01 03:44:30.604649] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:17:38.226 [2024-10-01 03:44:30.604658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.612828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.612853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:38.226 [2024-10-01 03:44:30.612863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.151 ms 00:17:38.226 [2024-10-01 03:44:30.612869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.620628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.620651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:38.226 [2024-10-01 03:44:30.620662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.724 ms 00:17:38.226 [2024-10-01 03:44:30.620667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.627737] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.627904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:38.226 [2024-10-01 03:44:30.627920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.031 ms 00:17:38.226 [2024-10-01 03:44:30.627926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.634870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.226 [2024-10-01 03:44:30.634972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:38.226 [2024-10-01 03:44:30.634986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.890 ms 00:17:38.226 [2024-10-01 03:44:30.634991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.226 [2024-10-01 03:44:30.635041] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:38.226 [2024-10-01 03:44:30.635056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635131] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:38.226 [2024-10-01 03:44:30.635160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 
[2024-10-01 03:44:30.635305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:38.227 [2024-10-01 03:44:30.635468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635734] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:38.227 [2024-10-01 03:44:30.635746] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:38.227 [2024-10-01 03:44:30.635755] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53a1b279-b409-4899-8013-3cb089931240 00:17:38.227 [2024-10-01 03:44:30.635761] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:38.227 [2024-10-01 03:44:30.635768] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:38.227 [2024-10-01 03:44:30.635773] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:38.227 [2024-10-01 03:44:30.635781] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:38.227 [2024-10-01 03:44:30.635792] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:38.228 [2024-10-01 03:44:30.635800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:38.228 [2024-10-01 03:44:30.635807] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:38.228 [2024-10-01 03:44:30.635813] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:38.228 [2024-10-01 03:44:30.635818] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:38.228 [2024-10-01 03:44:30.635825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:17:38.228 [2024-10-01 03:44:30.635832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:38.228 [2024-10-01 03:44:30.635839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.787 ms 00:17:38.228 [2024-10-01 03:44:30.635845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.228 [2024-10-01 03:44:30.646263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.228 [2024-10-01 03:44:30.646370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:38.228 [2024-10-01 03:44:30.646932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.400 ms 00:17:38.228 [2024-10-01 03:44:30.647239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.228 [2024-10-01 03:44:30.648583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:38.228 [2024-10-01 03:44:30.648801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:38.228 [2024-10-01 03:44:30.648973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.995 ms 00:17:38.228 [2024-10-01 03:44:30.649162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.228 [2024-10-01 03:44:30.697058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.228 [2024-10-01 03:44:30.697222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:38.228 [2024-10-01 03:44:30.697304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.228 [2024-10-01 03:44:30.697330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.228 [2024-10-01 03:44:30.697475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.228 [2024-10-01 03:44:30.697508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:38.228 [2024-10-01 03:44:30.697562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.228 [2024-10-01 03:44:30.697584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.228 [2024-10-01 03:44:30.697654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.228 [2024-10-01 03:44:30.697779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:38.228 [2024-10-01 03:44:30.697818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.228 [2024-10-01 03:44:30.697884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.228 [2024-10-01 03:44:30.697930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.228 [2024-10-01 03:44:30.697951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:38.228 [2024-10-01 03:44:30.698015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.228 [2024-10-01 03:44:30.698039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.487 [2024-10-01 03:44:30.777746] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.487 [2024-10-01 03:44:30.777975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:38.487 [2024-10-01 03:44:30.778065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.487 [2024-10-01 03:44:30.778093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.487 [2024-10-01 
03:44:30.843065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.487 [2024-10-01 03:44:30.843290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:38.487 [2024-10-01 03:44:30.843380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.487 [2024-10-01 03:44:30.843403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.487 [2024-10-01 03:44:30.843525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.487 [2024-10-01 03:44:30.843550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:38.487 [2024-10-01 03:44:30.843574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.487 [2024-10-01 03:44:30.843593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.487 [2024-10-01 03:44:30.843704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.487 [2024-10-01 03:44:30.843731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:38.487 [2024-10-01 03:44:30.843754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.487 [2024-10-01 03:44:30.843773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.487 [2024-10-01 03:44:30.843886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.487 [2024-10-01 03:44:30.844032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:38.487 [2024-10-01 03:44:30.844060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.487 [2024-10-01 03:44:30.844080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.487 [2024-10-01 03:44:30.844168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.487 [2024-10-01 03:44:30.844271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:38.487 [2024-10-01 03:44:30.844344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.487 [2024-10-01 03:44:30.844355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.487 [2024-10-01 03:44:30.844411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.487 [2024-10-01 03:44:30.844421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:38.487 [2024-10-01 03:44:30.844433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.487 [2024-10-01 03:44:30.844441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.487 [2024-10-01 03:44:30.844493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:38.487 [2024-10-01 03:44:30.844506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:38.487 [2024-10-01 03:44:30.844516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:38.487 [2024-10-01 03:44:30.844525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:38.487 [2024-10-01 03:44:30.844674] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.286 ms, result 0 00:17:39.054 03:44:31 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:39.054 03:44:31 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:39.312 [2024-10-01 03:44:31.661233] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:39.312 [2024-10-01 03:44:31.661367] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74411 ] 00:17:39.312 [2024-10-01 03:44:31.809690] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.571 [2024-10-01 03:44:31.989347] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.830 [2024-10-01 03:44:32.218478] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.830 [2024-10-01 03:44:32.218545] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:39.830 [2024-10-01 03:44:32.372710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.830 [2024-10-01 03:44:32.372949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:39.830 [2024-10-01 03:44:32.372970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:39.830 [2024-10-01 03:44:32.372978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.830 [2024-10-01 03:44:32.375181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.830 [2024-10-01 03:44:32.375211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:39.830 [2024-10-01 03:44:32.375220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.185 ms 00:17:39.830 [2024-10-01 03:44:32.375228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.830 [2024-10-01 03:44:32.375295] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:39.830 [2024-10-01 03:44:32.375902] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:39.830 [2024-10-01 03:44:32.375930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:39.830 [2024-10-01 03:44:32.375940] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:39.830 [2024-10-01 03:44:32.375947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:17:39.830 [2024-10-01 03:44:32.375953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:39.830 [2024-10-01 03:44:32.377425] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:40.090 [2024-10-01 03:44:32.387651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.090 [2024-10-01 03:44:32.387770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:40.090 [2024-10-01 03:44:32.387784] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.227 ms 00:17:40.090 [2024-10-01 03:44:32.387791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.090 [2024-10-01 03:44:32.387859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.090 [2024-10-01 03:44:32.387868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:40.090 [2024-10-01 03:44:32.387878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.020 ms 00:17:40.090 [2024-10-01 03:44:32.387884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.090 [2024-10-01 03:44:32.394147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.090 [2024-10-01 03:44:32.394264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:40.090 [2024-10-01 03:44:32.394276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.231 ms 00:17:40.090 [2024-10-01 03:44:32.394283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.090 [2024-10-01 03:44:32.394371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.090 [2024-10-01 03:44:32.394381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:40.090 [2024-10-01 03:44:32.394388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:40.090 [2024-10-01 03:44:32.394394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.090 [2024-10-01 03:44:32.394417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.090 [2024-10-01 03:44:32.394424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:40.090 [2024-10-01 03:44:32.394431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:40.090 [2024-10-01 03:44:32.394437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.090 [2024-10-01 03:44:32.394461] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:40.090 [2024-10-01 03:44:32.397457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.090 [2024-10-01 03:44:32.397560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:40.090 [2024-10-01 03:44:32.397572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:17:40.090 [2024-10-01 03:44:32.397578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.090 [2024-10-01 03:44:32.397612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.090 [2024-10-01 03:44:32.397624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:40.090 [2024-10-01 03:44:32.397631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:17:40.090 [2024-10-01 03:44:32.397637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.090 [2024-10-01 03:44:32.397652] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:40.090 [2024-10-01 03:44:32.397668] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:40.090 [2024-10-01 03:44:32.397699] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:40.090 [2024-10-01 03:44:32.397712] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:40.090 [2024-10-01 03:44:32.397797] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:40.090 [2024-10-01 03:44:32.397807] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:40.090 [2024-10-01 03:44:32.397816] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:40.090 [2024-10-01 03:44:32.397824] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:40.090 [2024-10-01 03:44:32.397833] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:40.090 [2024-10-01 03:44:32.397839] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:40.090 [2024-10-01 03:44:32.397845] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:40.090 [2024-10-01 03:44:32.397852] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:40.090 [2024-10-01 03:44:32.397857] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:40.090 [2024-10-01 03:44:32.397864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.090 [2024-10-01 03:44:32.397872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:40.090 [2024-10-01 03:44:32.397880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:40.090 [2024-10-01 03:44:32.397886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.090 [2024-10-01 03:44:32.397960] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.090 [2024-10-01 03:44:32.397968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:40.090 [2024-10-01 03:44:32.397975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:40.090 [2024-10-01 03:44:32.397980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.090 [2024-10-01 03:44:32.398075] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:40.090 [2024-10-01 03:44:32.398084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:40.090 [2024-10-01 03:44:32.398094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.090 [2024-10-01 03:44:32.398100] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.090 [2024-10-01 03:44:32.398107] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:40.090 [2024-10-01 03:44:32.398112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:40.090 [2024-10-01 03:44:32.398118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:40.090 [2024-10-01 03:44:32.398125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:40.090 [2024-10-01 03:44:32.398132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:40.090 [2024-10-01 03:44:32.398138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.090 [2024-10-01 03:44:32.398143] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:40.090 [2024-10-01 03:44:32.398154] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:40.090 [2024-10-01 03:44:32.398162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:40.090 [2024-10-01 03:44:32.398168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:40.090 [2024-10-01 03:44:32.398174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:40.090 [2024-10-01 03:44:32.398180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.090 [2024-10-01 03:44:32.398186] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:40.090 [2024-10-01 03:44:32.398193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:40.090 [2024-10-01 03:44:32.398199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.090 [2024-10-01 03:44:32.398204] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:40.090 [2024-10-01 03:44:32.398210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:40.090 [2024-10-01 03:44:32.398217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.090 [2024-10-01 03:44:32.398222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:40.090 [2024-10-01 03:44:32.398228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:40.090 [2024-10-01 03:44:32.398234] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.090 [2024-10-01 03:44:32.398240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:40.090 [2024-10-01 03:44:32.398245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:40.090 [2024-10-01 03:44:32.398252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.090 [2024-10-01 03:44:32.398258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:40.090 [2024-10-01 03:44:32.398263] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:40.090 [2024-10-01 03:44:32.398268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:40.090 [2024-10-01 03:44:32.398273] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:40.090 [2024-10-01 03:44:32.398279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:40.091 [2024-10-01 03:44:32.398284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.091 [2024-10-01 03:44:32.398289] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:40.091 [2024-10-01 03:44:32.398294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:40.091 [2024-10-01 03:44:32.398299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:40.091 [2024-10-01 03:44:32.398305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:40.091 [2024-10-01 03:44:32.398310] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:40.091 [2024-10-01 03:44:32.398315] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.091 [2024-10-01 03:44:32.398320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:40.091 [2024-10-01 03:44:32.398325] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:40.091 [2024-10-01 03:44:32.398330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.091 [2024-10-01 03:44:32.398334] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:40.091 [2024-10-01 03:44:32.398340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:40.091 [2024-10-01 03:44:32.398346] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:40.091 [2024-10-01 03:44:32.398352] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:40.091 [2024-10-01 03:44:32.398358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:40.091 
[2024-10-01 03:44:32.398363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:40.091 [2024-10-01 03:44:32.398369] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:40.091 [2024-10-01 03:44:32.398374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:40.091 [2024-10-01 03:44:32.398379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:40.091 [2024-10-01 03:44:32.398384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:40.091 [2024-10-01 03:44:32.398392] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:40.091 [2024-10-01 03:44:32.398402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.091 [2024-10-01 03:44:32.398409] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:40.091 [2024-10-01 03:44:32.398415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:40.091 [2024-10-01 03:44:32.398421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:40.091 [2024-10-01 03:44:32.398426] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:40.091 [2024-10-01 03:44:32.398432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:40.091 [2024-10-01 03:44:32.398445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:40.091 [2024-10-01 03:44:32.398451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:40.091 [2024-10-01 03:44:32.398457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:40.091 [2024-10-01 03:44:32.398463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:40.091 [2024-10-01 03:44:32.398469] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:40.091 [2024-10-01 03:44:32.398475] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:40.091 [2024-10-01 03:44:32.398481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:40.091 [2024-10-01 03:44:32.398486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:40.091 [2024-10-01 03:44:32.398492] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:40.091 [2024-10-01 03:44:32.398498] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:40.091 [2024-10-01 03:44:32.398504] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:40.091 [2024-10-01 03:44:32.398511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:40.091 [2024-10-01 03:44:32.398516] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:40.091 [2024-10-01 03:44:32.398522] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:40.091 [2024-10-01 03:44:32.398528] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:40.091 [2024-10-01 03:44:32.398534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.091 [2024-10-01 03:44:32.398541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:40.091 [2024-10-01 03:44:32.398548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.512 ms 00:17:40.091 [2024-10-01 03:44:32.398553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.091 [2024-10-01 03:44:32.436928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.091 [2024-10-01 03:44:32.436997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:40.091 [2024-10-01 03:44:32.437037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.319 ms 00:17:40.091 [2024-10-01 03:44:32.437052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.091 [2024-10-01 03:44:32.437302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.091 [2024-10-01 03:44:32.437321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:40.091 [2024-10-01 03:44:32.437334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:17:40.091 [2024-10-01 03:44:32.437345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.091 [2024-10-01 03:44:32.463694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.091 [2024-10-01 03:44:32.463729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:40.091 [2024-10-01 03:44:32.463739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.316 ms 00:17:40.091 [2024-10-01 03:44:32.463745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.091 [2024-10-01 03:44:32.463816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.091 [2024-10-01 03:44:32.463824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:40.091 [2024-10-01 03:44:32.463831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:17:40.091 [2024-10-01 03:44:32.463838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.091 [2024-10-01 03:44:32.464259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.091 [2024-10-01 03:44:32.464273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:40.091 [2024-10-01 03:44:32.464281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.401 ms 00:17:40.091 [2024-10-01 03:44:32.464287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.091 [2024-10-01 
03:44:32.464427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.091 [2024-10-01 03:44:32.464436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:40.091 [2024-10-01 03:44:32.464443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:17:40.091 [2024-10-01 03:44:32.464449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.091 [2024-10-01 03:44:32.475793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.091 [2024-10-01 03:44:32.475980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:40.091 [2024-10-01 03:44:32.475993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.326 ms 00:17:40.091 [2024-10-01 03:44:32.476012] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.091 [2024-10-01 03:44:32.486262] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:40.091 [2024-10-01 03:44:32.486386] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:40.091 [2024-10-01 03:44:32.486399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.091 [2024-10-01 03:44:32.486406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:40.091 [2024-10-01 03:44:32.486414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.288 ms 00:17:40.092 [2024-10-01 03:44:32.486420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.505338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.505453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:40.092 [2024-10-01 03:44:32.505470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.848 ms 00:17:40.092 [2024-10-01 03:44:32.505477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.514545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.514572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:40.092 [2024-10-01 03:44:32.514579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.008 ms 00:17:40.092 [2024-10-01 03:44:32.514586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.523314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.523340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:40.092 [2024-10-01 03:44:32.523347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.684 ms 00:17:40.092 [2024-10-01 03:44:32.523353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.523845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.523904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:40.092 [2024-10-01 03:44:32.523912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:17:40.092 [2024-10-01 03:44:32.523918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.572147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.572200] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:40.092 [2024-10-01 03:44:32.572213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.209 ms 00:17:40.092 [2024-10-01 03:44:32.572219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.580700] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:40.092 [2024-10-01 03:44:32.595812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.596009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:40.092 [2024-10-01 03:44:32.596026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.476 ms 00:17:40.092 [2024-10-01 03:44:32.596032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.596131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.596141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:40.092 [2024-10-01 03:44:32.596150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:40.092 [2024-10-01 03:44:32.596157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.596212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.596223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:40.092 [2024-10-01 03:44:32.596230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:40.092 [2024-10-01 03:44:32.596236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.596255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.596262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:40.092 [2024-10-01 03:44:32.596268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:40.092 [2024-10-01 03:44:32.596274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.596305] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:40.092 [2024-10-01 03:44:32.596313] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.596321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:40.092 [2024-10-01 03:44:32.596328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:40.092 [2024-10-01 03:44:32.596334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.615052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.615194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:40.092 [2024-10-01 03:44:32.615211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.702 ms 00:17:40.092 [2024-10-01 03:44:32.615218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.615301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:40.092 [2024-10-01 03:44:32.615311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:17:40.092 [2024-10-01 03:44:32.615318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:17:40.092 [2024-10-01 03:44:32.615324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:40.092 [2024-10-01 03:44:32.616108] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:40.092 [2024-10-01 03:44:32.618378] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.115 ms, result 0 00:17:40.092 [2024-10-01 03:44:32.619167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:40.092 [2024-10-01 03:44:32.633910] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:45.776  Copying: 46/256 [MB] (46 MBps) Copying: 92/256 [MB] (46 MBps) Copying: 136/256 [MB] (43 MBps) Copying: 181/256 [MB] (45 MBps) Copying: 233/256 [MB] (51 MBps) Copying: 256/256 [MB] (average 46 MBps)[2024-10-01 03:44:38.162597] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:45.776 [2024-10-01 03:44:38.172462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.172505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:45.776 [2024-10-01 03:44:38.172520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:45.776 [2024-10-01 03:44:38.172528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.172551] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:45.776 [2024-10-01 03:44:38.175359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.175389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:45.776 [2024-10-01 03:44:38.175400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.794 ms 00:17:45.776 [2024-10-01 03:44:38.175409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.175690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.175703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:45.776 [2024-10-01 03:44:38.175712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.241 ms 00:17:45.776 [2024-10-01 03:44:38.175720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.179425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.179446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:45.776 [2024-10-01 03:44:38.179455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.690 ms 00:17:45.776 [2024-10-01 03:44:38.179463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.186353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.186379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:45.776 [2024-10-01 03:44:38.186394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.872 ms 00:17:45.776 [2024-10-01 03:44:38.186402] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.210328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.210364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:45.776 [2024-10-01 03:44:38.210376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.859 ms 00:17:45.776 [2024-10-01 03:44:38.210383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.224583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.224619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:45.776 [2024-10-01 03:44:38.224631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.177 ms 00:17:45.776 [2024-10-01 03:44:38.224639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.224778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.224789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:45.776 [2024-10-01 03:44:38.224798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.089 ms 00:17:45.776 [2024-10-01 03:44:38.224806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.248470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.248736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:45.776 [2024-10-01 03:44:38.248757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.642 ms 00:17:45.776 [2024-10-01 03:44:38.248765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.271481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.271630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:45.776 [2024-10-01 03:44:38.271647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.687 ms 00:17:45.776 [2024-10-01 03:44:38.271655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.294017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.294157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:45.776 [2024-10-01 03:44:38.294174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.337 ms 00:17:45.776 [2024-10-01 03:44:38.294182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.316494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.776 [2024-10-01 03:44:38.316528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:45.776 [2024-10-01 03:44:38.316539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.258 ms 00:17:45.776 [2024-10-01 03:44:38.316547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:45.776 [2024-10-01 03:44:38.316570] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:45.776 [2024-10-01 03:44:38.316585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316596] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 
03:44:38.316795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:45.776 [2024-10-01 03:44:38.316913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 
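Note on the band-validity dump being printed here: ftl_debug.c reports each of the 100 bands as "valid / 261120 wr_cnt: N state: S", so a single dump spans dozens of log lines. When triaging a saved console log, a sketch along these lines can collapse the dump into per-state counts (the "console.log" filename is hypothetical, not something this job produces):

  # Count FTL bands per state from a captured console log (hypothetical filename).
  grep -o 'Band [0-9]*: [0-9]* / [0-9]* wr_cnt: [0-9]* state: [a-z]*' console.log \
    | awk '{states[$NF]++} END {for (s in states) print s ": " states[s] " bands"}'

For this run it would report all 100 bands as free, which is consistent with the "total valid LBAs: 0" line in the statistics dump that follows the band listing below.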
00:17:45.777 [2024-10-01 03:44:38.316991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.316999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 
wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:45.777 [2024-10-01 03:44:38.317440] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] 00:17:45.777 [2024-10-01 03:44:38.317448] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53a1b279-b409-4899-8013-3cb089931240 00:17:45.777 [2024-10-01 03:44:38.317456] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:45.777 [2024-10-01 03:44:38.317463] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:45.777 [2024-10-01 03:44:38.317470] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:45.777 [2024-10-01 03:44:38.317482] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:45.777 [2024-10-01 03:44:38.317489] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:45.777 [2024-10-01 03:44:38.317497] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:45.777 [2024-10-01 03:44:38.317505] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:45.777 [2024-10-01 03:44:38.317511] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:45.777 [2024-10-01 03:44:38.317518] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:45.777 [2024-10-01 03:44:38.317525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:45.777 [2024-10-01 03:44:38.317534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:45.777 [2024-10-01 03:44:38.317543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.956 ms 00:17:45.777 [2024-10-01 03:44:38.317550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.079 [2024-10-01 03:44:38.330554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.079 [2024-10-01 03:44:38.330592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:46.079 [2024-10-01 03:44:38.330604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.986 ms 00:17:46.079 [2024-10-01 03:44:38.330612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.079 [2024-10-01 03:44:38.330983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:46.079 [2024-10-01 03:44:38.331016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:46.079 [2024-10-01 03:44:38.331027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:17:46.079 [2024-10-01 03:44:38.331034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.079 [2024-10-01 03:44:38.362915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.079 [2024-10-01 03:44:38.362958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:46.079 [2024-10-01 03:44:38.362970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.079 [2024-10-01 03:44:38.362978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.079 [2024-10-01 03:44:38.363078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.363088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:46.080 [2024-10-01 03:44:38.363103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.363110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.363160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 
[2024-10-01 03:44:38.363173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:46.080 [2024-10-01 03:44:38.363181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.363188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.363205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.363213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:46.080 [2024-10-01 03:44:38.363221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.363228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.445406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.445475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:46.080 [2024-10-01 03:44:38.445489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.445497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.511519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.511583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:46.080 [2024-10-01 03:44:38.511597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.511605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.511667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.511677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:46.080 [2024-10-01 03:44:38.511690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.511697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.511728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.511737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:46.080 [2024-10-01 03:44:38.511746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.511755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.511849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.511860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:46.080 [2024-10-01 03:44:38.511869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.511879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.511911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.511920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:46.080 [2024-10-01 03:44:38.511928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.511936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.511975] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.511985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:46.080 [2024-10-01 03:44:38.511992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.512025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.512087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:46.080 [2024-10-01 03:44:38.512098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:46.080 [2024-10-01 03:44:38.512107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:46.080 [2024-10-01 03:44:38.512115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:46.080 [2024-10-01 03:44:38.512271] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.793 ms, result 0 00:17:47.029 00:17:47.029 00:17:47.029 03:44:39 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:17:47.029 03:44:39 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:17:47.594 03:44:39 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:47.594 [2024-10-01 03:44:39.901702] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:47.594 [2024-10-01 03:44:39.901829] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74504 ] 00:17:47.594 [2024-10-01 03:44:40.050132] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:47.852 [2024-10-01 03:44:40.235937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:48.111 [2024-10-01 03:44:40.466246] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:48.111 [2024-10-01 03:44:40.466312] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:48.111 [2024-10-01 03:44:40.615883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.615942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:48.111 [2024-10-01 03:44:40.615957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:48.111 [2024-10-01 03:44:40.615964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 03:44:40.618303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.618334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.111 [2024-10-01 03:44:40.618343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.324 ms 00:17:48.111 [2024-10-01 03:44:40.618352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 03:44:40.618417] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:48.111 [2024-10-01 03:44:40.619115] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:48.111 [2024-10-01 03:44:40.619133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.619144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.111 [2024-10-01 03:44:40.619152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:17:48.111 [2024-10-01 03:44:40.619158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 03:44:40.620474] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:48.111 [2024-10-01 03:44:40.630711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.630739] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:48.111 [2024-10-01 03:44:40.630748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.238 ms 00:17:48.111 [2024-10-01 03:44:40.630755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 03:44:40.630832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.630840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:48.111 [2024-10-01 03:44:40.630849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:48.111 [2024-10-01 03:44:40.630855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 03:44:40.637050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.637241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.111 [2024-10-01 03:44:40.637254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.160 ms 00:17:48.111 [2024-10-01 03:44:40.637261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 03:44:40.637343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.637354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.111 [2024-10-01 03:44:40.637361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:17:48.111 [2024-10-01 03:44:40.637367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 03:44:40.637389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.637397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:48.111 [2024-10-01 03:44:40.637405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:48.111 [2024-10-01 03:44:40.637412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 03:44:40.637429] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:48.111 [2024-10-01 03:44:40.640607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.640728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.111 [2024-10-01 03:44:40.640741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.182 ms 00:17:48.111 [2024-10-01 03:44:40.640748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 
03:44:40.640785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.111 [2024-10-01 03:44:40.640796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:48.111 [2024-10-01 03:44:40.640804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:48.111 [2024-10-01 03:44:40.640811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.111 [2024-10-01 03:44:40.640838] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:48.111 [2024-10-01 03:44:40.640856] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:48.111 [2024-10-01 03:44:40.640886] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:48.111 [2024-10-01 03:44:40.640899] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:48.111 [2024-10-01 03:44:40.640985] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:48.111 [2024-10-01 03:44:40.640994] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:48.111 [2024-10-01 03:44:40.641017] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:48.111 [2024-10-01 03:44:40.641026] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:48.111 [2024-10-01 03:44:40.641033] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641040] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:48.112 [2024-10-01 03:44:40.641047] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:48.112 [2024-10-01 03:44:40.641054] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:48.112 [2024-10-01 03:44:40.641061] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:48.112 [2024-10-01 03:44:40.641068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.112 [2024-10-01 03:44:40.641077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:48.112 [2024-10-01 03:44:40.641083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:17:48.112 [2024-10-01 03:44:40.641089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.112 [2024-10-01 03:44:40.641158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.112 [2024-10-01 03:44:40.641166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:48.112 [2024-10-01 03:44:40.641172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:48.112 [2024-10-01 03:44:40.641179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.112 [2024-10-01 03:44:40.641257] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:48.112 [2024-10-01 03:44:40.641266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:48.112 [2024-10-01 03:44:40.641275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641281] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:48.112 [2024-10-01 03:44:40.641293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:48.112 [2024-10-01 03:44:40.641312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.112 [2024-10-01 03:44:40.641323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:48.112 [2024-10-01 03:44:40.641333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:48.112 [2024-10-01 03:44:40.641339] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:48.112 [2024-10-01 03:44:40.641345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:48.112 [2024-10-01 03:44:40.641351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:48.112 [2024-10-01 03:44:40.641357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641362] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:48.112 [2024-10-01 03:44:40.641367] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641372] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641379] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:48.112 [2024-10-01 03:44:40.641385] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641395] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:48.112 [2024-10-01 03:44:40.641400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641410] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:48.112 [2024-10-01 03:44:40.641415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:48.112 [2024-10-01 03:44:40.641431] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641441] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:48.112 [2024-10-01 03:44:40.641446] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641451] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.112 [2024-10-01 03:44:40.641457] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:48.112 [2024-10-01 03:44:40.641462] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:48.112 [2024-10-01 03:44:40.641467] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:48.112 [2024-10-01 03:44:40.641472] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:48.112 [2024-10-01 03:44:40.641477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:48.112 [2024-10-01 03:44:40.641482] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:48.112 [2024-10-01 03:44:40.641492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:48.112 [2024-10-01 03:44:40.641498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641503] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:48.112 [2024-10-01 03:44:40.641509] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:48.112 [2024-10-01 03:44:40.641515] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641522] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:48.112 [2024-10-01 03:44:40.641528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:48.112 [2024-10-01 03:44:40.641534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:48.112 [2024-10-01 03:44:40.641539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:48.112 [2024-10-01 03:44:40.641545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:48.112 [2024-10-01 03:44:40.641552] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:48.112 [2024-10-01 03:44:40.641557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:48.112 [2024-10-01 03:44:40.641564] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:48.112 [2024-10-01 03:44:40.641574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.112 [2024-10-01 03:44:40.641581] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:48.112 [2024-10-01 03:44:40.641587] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:48.112 [2024-10-01 03:44:40.641593] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:48.112 [2024-10-01 03:44:40.641598] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:48.112 [2024-10-01 03:44:40.641604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:48.112 [2024-10-01 03:44:40.641609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:48.112 [2024-10-01 03:44:40.641615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:48.112 [2024-10-01 
03:44:40.641621] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:48.112 [2024-10-01 03:44:40.641627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:48.112 [2024-10-01 03:44:40.641632] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:48.112 [2024-10-01 03:44:40.641637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:48.112 [2024-10-01 03:44:40.641643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:48.112 [2024-10-01 03:44:40.641648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:48.112 [2024-10-01 03:44:40.641655] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:48.112 [2024-10-01 03:44:40.641660] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:48.112 [2024-10-01 03:44:40.641667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:48.113 [2024-10-01 03:44:40.641674] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:48.113 [2024-10-01 03:44:40.641679] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:48.113 [2024-10-01 03:44:40.641684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:48.113 [2024-10-01 03:44:40.641691] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:48.113 [2024-10-01 03:44:40.641697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.113 [2024-10-01 03:44:40.641704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:48.113 [2024-10-01 03:44:40.641709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.492 ms 00:17:48.113 [2024-10-01 03:44:40.641716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.679863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.679911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.371 [2024-10-01 03:44:40.679921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.093 ms 00:17:48.371 [2024-10-01 03:44:40.679928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.680076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.680089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:48.371 [2024-10-01 03:44:40.680097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:17:48.371 [2024-10-01 03:44:40.680103] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.706455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.706679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.371 [2024-10-01 03:44:40.706694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.334 ms 00:17:48.371 [2024-10-01 03:44:40.706702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.706778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.706786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.371 [2024-10-01 03:44:40.706793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.371 [2024-10-01 03:44:40.706800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.707219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.707234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.371 [2024-10-01 03:44:40.707242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.402 ms 00:17:48.371 [2024-10-01 03:44:40.707249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.707396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.707405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.371 [2024-10-01 03:44:40.707413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:17:48.371 [2024-10-01 03:44:40.707419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.718880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.718909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.371 [2024-10-01 03:44:40.718918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.442 ms 00:17:48.371 [2024-10-01 03:44:40.718924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.729105] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:17:48.371 [2024-10-01 03:44:40.729269] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:48.371 [2024-10-01 03:44:40.729283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.729291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:48.371 [2024-10-01 03:44:40.729298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.247 ms 00:17:48.371 [2024-10-01 03:44:40.729305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.748320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.748453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:48.371 [2024-10-01 03:44:40.748472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.952 ms 00:17:48.371 [2024-10-01 03:44:40.748479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 
03:44:40.757521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.757552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:48.371 [2024-10-01 03:44:40.757560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.979 ms 00:17:48.371 [2024-10-01 03:44:40.757566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.766211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.766239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:48.371 [2024-10-01 03:44:40.766248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.596 ms 00:17:48.371 [2024-10-01 03:44:40.766254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.766769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.371 [2024-10-01 03:44:40.766786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:48.371 [2024-10-01 03:44:40.766795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.444 ms 00:17:48.371 [2024-10-01 03:44:40.766801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.371 [2024-10-01 03:44:40.815121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.372 [2024-10-01 03:44:40.815345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:48.372 [2024-10-01 03:44:40.815364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.298 ms 00:17:48.372 [2024-10-01 03:44:40.815372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.372 [2024-10-01 03:44:40.823866] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:48.372 [2024-10-01 03:44:40.839334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.372 [2024-10-01 03:44:40.839378] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:48.372 [2024-10-01 03:44:40.839390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.860 ms 00:17:48.372 [2024-10-01 03:44:40.839397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.372 [2024-10-01 03:44:40.839500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.372 [2024-10-01 03:44:40.839510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:48.372 [2024-10-01 03:44:40.839519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:48.372 [2024-10-01 03:44:40.839525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.372 [2024-10-01 03:44:40.839579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.372 [2024-10-01 03:44:40.839588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:48.372 [2024-10-01 03:44:40.839595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:48.372 [2024-10-01 03:44:40.839601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.372 [2024-10-01 03:44:40.839621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.372 [2024-10-01 03:44:40.839628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:48.372 [2024-10-01 03:44:40.839635] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:48.372 [2024-10-01 03:44:40.839641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.372 [2024-10-01 03:44:40.839672] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:48.372 [2024-10-01 03:44:40.839681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.372 [2024-10-01 03:44:40.839689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:48.372 [2024-10-01 03:44:40.839696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:48.372 [2024-10-01 03:44:40.839702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.372 [2024-10-01 03:44:40.858315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.372 [2024-10-01 03:44:40.858497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:48.372 [2024-10-01 03:44:40.858514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.595 ms 00:17:48.372 [2024-10-01 03:44:40.858522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.372 [2024-10-01 03:44:40.858610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.372 [2024-10-01 03:44:40.858620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:48.372 [2024-10-01 03:44:40.858628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:17:48.372 [2024-10-01 03:44:40.858634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.372 [2024-10-01 03:44:40.859435] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.372 [2024-10-01 03:44:40.861745] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 243.286 ms, result 0 00:17:48.372 [2024-10-01 03:44:40.862610] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:48.372 [2024-10-01 03:44:40.877617] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:48.631  Copying: 4096/4096 [kB] (average 50 MBps)[2024-10-01 03:44:40.961165] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:48.631 [2024-10-01 03:44:40.968765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.631 [2024-10-01 03:44:40.968911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:48.631 [2024-10-01 03:44:40.968929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:48.631 [2024-10-01 03:44:40.968936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.631 [2024-10-01 03:44:40.968957] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:48.631 [2024-10-01 03:44:40.971179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.631 [2024-10-01 03:44:40.971201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:48.631 [2024-10-01 03:44:40.971210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.210 ms 00:17:48.631 [2024-10-01 03:44:40.971218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
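[Annotation] The copy that just completed ("Copying: 4096/4096 [kB] (average 50 MBps)") is the trim test writing a known pattern through the FTL bdev; the shutdown trace that continues below is spdk_dd tearing the device down cleanly once the copy returns. A minimal sketch of that write step, using the same spdk_dd invocation the xtrace shows at ftl/trim.sh@90 (repo paths as in this run):

  # Write 1024 blocks of a pre-generated random pattern to the ftl0 bdev
  # (1024 x 4 KiB = the 4096 kB reported by the progress meter above).
  # --json points spdk_dd at the FTL configuration saved earlier in the run.
  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_dd" \
      --if="$SPDK/test/ftl/random_pattern" \
      --ob=ftl0 \
      --count=1024 \
      --json="$SPDK/test/ftl/config/ftl.json"
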
00:17:48.631 [2024-10-01 03:44:40.972994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.631 [2024-10-01 03:44:40.973036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:48.631 [2024-10-01 03:44:40.973044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.758 ms 00:17:48.631 [2024-10-01 03:44:40.973051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.631 [2024-10-01 03:44:40.976133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.631 [2024-10-01 03:44:40.976156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:48.631 [2024-10-01 03:44:40.976164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.069 ms 00:17:48.631 [2024-10-01 03:44:40.976171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.631 [2024-10-01 03:44:40.981419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.631 [2024-10-01 03:44:40.981442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:48.631 [2024-10-01 03:44:40.981453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.226 ms 00:17:48.631 [2024-10-01 03:44:40.981460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.631 [2024-10-01 03:44:40.999629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.631 [2024-10-01 03:44:40.999760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:48.631 [2024-10-01 03:44:40.999775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.119 ms 00:17:48.632 [2024-10-01 03:44:40.999782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.011554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.632 [2024-10-01 03:44:41.011586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:48.632 [2024-10-01 03:44:41.011597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.744 ms 00:17:48.632 [2024-10-01 03:44:41.011604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.011712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.632 [2024-10-01 03:44:41.011721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:48.632 [2024-10-01 03:44:41.011729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:17:48.632 [2024-10-01 03:44:41.011735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.030095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.632 [2024-10-01 03:44:41.030226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:48.632 [2024-10-01 03:44:41.030240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.341 ms 00:17:48.632 [2024-10-01 03:44:41.030247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.047810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.632 [2024-10-01 03:44:41.047841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:48.632 [2024-10-01 03:44:41.047851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.525 ms 00:17:48.632 [2024-10-01 03:44:41.047856] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.065363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.632 [2024-10-01 03:44:41.065501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:48.632 [2024-10-01 03:44:41.065514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.475 ms 00:17:48.632 [2024-10-01 03:44:41.065520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.082602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.632 [2024-10-01 03:44:41.082631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:48.632 [2024-10-01 03:44:41.082639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.027 ms 00:17:48.632 [2024-10-01 03:44:41.082645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.082675] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:48.632 [2024-10-01 03:44:41.082689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082804] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 
[2024-10-01 03:44:41.082955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.082999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 
state: free 00:17:48.632 [2024-10-01 03:44:41.083127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 
0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:48.632 [2024-10-01 03:44:41.083353] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:48.632 [2024-10-01 03:44:41.083360] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53a1b279-b409-4899-8013-3cb089931240 00:17:48.632 [2024-10-01 03:44:41.083366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:48.632 [2024-10-01 03:44:41.083372] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:48.632 [2024-10-01 03:44:41.083381] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:48.632 [2024-10-01 03:44:41.083388] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:48.632 [2024-10-01 03:44:41.083394] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:48.632 [2024-10-01 03:44:41.083401] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:48.632 [2024-10-01 03:44:41.083406] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:48.632 [2024-10-01 03:44:41.083412] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:48.632 [2024-10-01 03:44:41.083417] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:48.632 [2024-10-01 03:44:41.083423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.632 [2024-10-01 03:44:41.083429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:48.632 [2024-10-01 03:44:41.083435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:17:48.632 [2024-10-01 03:44:41.083441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.093824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.632 [2024-10-01 03:44:41.093856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:48.632 [2024-10-01 03:44:41.093866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.367 ms 00:17:48.632 [2024-10-01 03:44:41.093872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.094196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:48.632 [2024-10-01 03:44:41.094206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
P2L checkpointing 00:17:48.632 [2024-10-01 03:44:41.094213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:17:48.632 [2024-10-01 03:44:41.094219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.119407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.632 [2024-10-01 03:44:41.119442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:48.632 [2024-10-01 03:44:41.119451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.632 [2024-10-01 03:44:41.119458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.119532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.632 [2024-10-01 03:44:41.119539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:48.632 [2024-10-01 03:44:41.119547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.632 [2024-10-01 03:44:41.119553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.119590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.632 [2024-10-01 03:44:41.119601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:48.632 [2024-10-01 03:44:41.119608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.632 [2024-10-01 03:44:41.119615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.632 [2024-10-01 03:44:41.119631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.632 [2024-10-01 03:44:41.119637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:48.632 [2024-10-01 03:44:41.119643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.632 [2024-10-01 03:44:41.119649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.891 [2024-10-01 03:44:41.183214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.891 [2024-10-01 03:44:41.183447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:48.891 [2024-10-01 03:44:41.183465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.891 [2024-10-01 03:44:41.183472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.891 [2024-10-01 03:44:41.234498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.891 [2024-10-01 03:44:41.234727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:48.891 [2024-10-01 03:44:41.234741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.891 [2024-10-01 03:44:41.234748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.891 [2024-10-01 03:44:41.234808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.891 [2024-10-01 03:44:41.234816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:48.891 [2024-10-01 03:44:41.234827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.891 [2024-10-01 03:44:41.234834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.891 [2024-10-01 03:44:41.234859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.891 [2024-10-01 
03:44:41.234866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:48.891 [2024-10-01 03:44:41.234873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.891 [2024-10-01 03:44:41.234880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.891 [2024-10-01 03:44:41.234962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.891 [2024-10-01 03:44:41.234972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:48.891 [2024-10-01 03:44:41.234979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.891 [2024-10-01 03:44:41.234988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.891 [2024-10-01 03:44:41.235031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.891 [2024-10-01 03:44:41.235040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:48.891 [2024-10-01 03:44:41.235047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.891 [2024-10-01 03:44:41.235054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.891 [2024-10-01 03:44:41.235091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.891 [2024-10-01 03:44:41.235099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:48.891 [2024-10-01 03:44:41.235106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.891 [2024-10-01 03:44:41.235115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.891 [2024-10-01 03:44:41.235157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:48.891 [2024-10-01 03:44:41.235166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:48.891 [2024-10-01 03:44:41.235172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:48.891 [2024-10-01 03:44:41.235179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:48.891 [2024-10-01 03:44:41.235312] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 266.530 ms, result 0 00:17:49.456 00:17:49.456 00:17:49.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.456 03:44:41 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=74524 00:17:49.456 03:44:41 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 74524 00:17:49.456 03:44:41 ftl.ftl_trim -- common/autotest_common.sh@831 -- # '[' -z 74524 ']' 00:17:49.456 03:44:41 ftl.ftl_trim -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.456 03:44:41 ftl.ftl_trim -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:49.456 03:44:41 ftl.ftl_trim -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.456 03:44:41 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:17:49.456 03:44:41 ftl.ftl_trim -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:49.456 03:44:41 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:17:49.456 [2024-10-01 03:44:41.991327] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
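[Annotation] The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above is printed by waitforlisten from autotest_common.sh: trim.sh has relaunched the SPDK target (trim.sh@92: spdk_tgt -L ftl_init) and blocks until the RPC socket answers before replaying the saved configuration into it. A rough bash equivalent of that gate, assuming the default socket path shown in the log (the actual waitforlisten helper has more retry/error handling than this sketch):

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/bin/spdk_tgt" -L ftl_init &
  svcpid=$!   # the log records svcpid=74524 for this run

  # Approximation of waitforlisten: poll the RPC socket until the target
  # responds, then replay the saved config (trim.sh@96: rpc.py load_config;
  # the input redirect is not visible in the xtrace, presumably ftl.json).
  rpc_addr=/var/tmp/spdk.sock
  until "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
  "$SPDK/scripts/rpc.py" -s "$rpc_addr" load_config < "$SPDK/test/ftl/config/ftl.json"

The DPDK EAL banner that follows below is that relaunched spdk_tgt (pid 74524) initializing.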
00:17:49.456 [2024-10-01 03:44:41.991606] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74524 ] 00:17:49.714 [2024-10-01 03:44:42.137876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:49.973 [2024-10-01 03:44:42.323935] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:50.540 03:44:42 ftl.ftl_trim -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:50.540 03:44:42 ftl.ftl_trim -- common/autotest_common.sh@864 -- # return 0 00:17:50.540 03:44:42 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:17:50.540 [2024-10-01 03:44:43.051127] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:50.540 [2024-10-01 03:44:43.051201] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:50.800 [2024-10-01 03:44:43.221158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.221374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:50.800 [2024-10-01 03:44:43.221394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:50.800 [2024-10-01 03:44:43.221404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.223660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.223692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:50.800 [2024-10-01 03:44:43.223703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.238 ms 00:17:50.800 [2024-10-01 03:44:43.223709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.223776] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:50.800 [2024-10-01 03:44:43.224349] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:50.800 [2024-10-01 03:44:43.224375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.224382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:50.800 [2024-10-01 03:44:43.224391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.608 ms 00:17:50.800 [2024-10-01 03:44:43.224397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.225729] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:50.800 [2024-10-01 03:44:43.236114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.236146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:50.800 [2024-10-01 03:44:43.236156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.387 ms 00:17:50.800 [2024-10-01 03:44:43.236164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.236232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.236244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:50.800 [2024-10-01 03:44:43.236251] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:50.800 [2024-10-01 03:44:43.236259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.242685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.242717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:50.800 [2024-10-01 03:44:43.242726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.388 ms 00:17:50.800 [2024-10-01 03:44:43.242734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.242830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.242841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:50.800 [2024-10-01 03:44:43.242848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:17:50.800 [2024-10-01 03:44:43.242856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.242877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.242887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:50.800 [2024-10-01 03:44:43.242894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:50.800 [2024-10-01 03:44:43.242901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.242921] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:50.800 [2024-10-01 03:44:43.246079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.246103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:50.800 [2024-10-01 03:44:43.246113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.162 ms 00:17:50.800 [2024-10-01 03:44:43.246121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.246155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.246162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:50.800 [2024-10-01 03:44:43.246170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:50.800 [2024-10-01 03:44:43.246176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.246196] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:50.800 [2024-10-01 03:44:43.246214] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:50.800 [2024-10-01 03:44:43.246250] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:50.800 [2024-10-01 03:44:43.246265] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:50.800 [2024-10-01 03:44:43.246351] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:50.800 [2024-10-01 03:44:43.246361] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:50.800 [2024-10-01 03:44:43.246371] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:50.800 [2024-10-01 03:44:43.246380] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:50.800 [2024-10-01 03:44:43.246389] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:50.800 [2024-10-01 03:44:43.246395] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:50.800 [2024-10-01 03:44:43.246402] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:50.800 [2024-10-01 03:44:43.246408] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:50.800 [2024-10-01 03:44:43.246417] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:50.800 [2024-10-01 03:44:43.246426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.246434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:50.800 [2024-10-01 03:44:43.246440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms 00:17:50.800 [2024-10-01 03:44:43.246447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.246537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.800 [2024-10-01 03:44:43.246546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:50.800 [2024-10-01 03:44:43.246552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:50.800 [2024-10-01 03:44:43.246560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.800 [2024-10-01 03:44:43.246650] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:50.801 [2024-10-01 03:44:43.246661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:50.801 [2024-10-01 03:44:43.246668] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.801 [2024-10-01 03:44:43.246676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:50.801 [2024-10-01 03:44:43.246690] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:50.801 [2024-10-01 03:44:43.246705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:50.801 [2024-10-01 03:44:43.246711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.801 [2024-10-01 03:44:43.246724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:50.801 [2024-10-01 03:44:43.246731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:50.801 [2024-10-01 03:44:43.246736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:50.801 [2024-10-01 03:44:43.246742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:50.801 [2024-10-01 03:44:43.246747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:50.801 [2024-10-01 03:44:43.246754] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.801 
[2024-10-01 03:44:43.246759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:50.801 [2024-10-01 03:44:43.246769] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:50.801 [2024-10-01 03:44:43.246780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:50.801 [2024-10-01 03:44:43.246791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.801 [2024-10-01 03:44:43.246803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:50.801 [2024-10-01 03:44:43.246811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246816] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.801 [2024-10-01 03:44:43.246823] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:50.801 [2024-10-01 03:44:43.246828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.801 [2024-10-01 03:44:43.246839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:50.801 [2024-10-01 03:44:43.246846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246851] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:50.801 [2024-10-01 03:44:43.246859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:50.801 [2024-10-01 03:44:43.246864] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.801 [2024-10-01 03:44:43.246876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:50.801 [2024-10-01 03:44:43.246882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:50.801 [2024-10-01 03:44:43.246887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:50.801 [2024-10-01 03:44:43.246894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:50.801 [2024-10-01 03:44:43.246899] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:50.801 [2024-10-01 03:44:43.246906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246912] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:50.801 [2024-10-01 03:44:43.246919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:50.801 [2024-10-01 03:44:43.246924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246930] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:50.801 [2024-10-01 03:44:43.246936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:50.801 [2024-10-01 03:44:43.246943] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:50.801 [2024-10-01 03:44:43.246950] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:50.801 [2024-10-01 03:44:43.246958] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:17:50.801 [2024-10-01 03:44:43.246962] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:50.801 [2024-10-01 03:44:43.246971] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:50.801 [2024-10-01 03:44:43.246977] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:50.801 [2024-10-01 03:44:43.246983] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:50.801 [2024-10-01 03:44:43.246988] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:50.801 [2024-10-01 03:44:43.246996] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:50.801 [2024-10-01 03:44:43.247020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.801 [2024-10-01 03:44:43.247030] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:50.801 [2024-10-01 03:44:43.247036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:50.801 [2024-10-01 03:44:43.247045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:50.801 [2024-10-01 03:44:43.247051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:50.801 [2024-10-01 03:44:43.247059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:50.801 [2024-10-01 03:44:43.247065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:50.801 [2024-10-01 03:44:43.247071] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:50.801 [2024-10-01 03:44:43.247077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:50.801 [2024-10-01 03:44:43.247084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:50.801 [2024-10-01 03:44:43.247090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:50.801 [2024-10-01 03:44:43.247097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:50.801 [2024-10-01 03:44:43.247103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:50.801 [2024-10-01 03:44:43.247110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:50.801 [2024-10-01 03:44:43.247115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:50.801 [2024-10-01 03:44:43.247122] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:50.801 [2024-10-01 
03:44:43.247129] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:50.801 [2024-10-01 03:44:43.247150] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:50.801 [2024-10-01 03:44:43.247155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:50.801 [2024-10-01 03:44:43.247162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:50.801 [2024-10-01 03:44:43.247168] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:50.801 [2024-10-01 03:44:43.247175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.801 [2024-10-01 03:44:43.247181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:50.802 [2024-10-01 03:44:43.247188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.585 ms 00:17:50.802 [2024-10-01 03:44:43.247194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.802 [2024-10-01 03:44:43.271480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.802 [2024-10-01 03:44:43.271684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:50.802 [2024-10-01 03:44:43.271702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.222 ms 00:17:50.802 [2024-10-01 03:44:43.271709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.802 [2024-10-01 03:44:43.271838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.802 [2024-10-01 03:44:43.271846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:50.802 [2024-10-01 03:44:43.271854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:50.802 [2024-10-01 03:44:43.271860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.802 [2024-10-01 03:44:43.309721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.802 [2024-10-01 03:44:43.309797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:50.802 [2024-10-01 03:44:43.309828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.831 ms 00:17:50.802 [2024-10-01 03:44:43.309844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.802 [2024-10-01 03:44:43.310024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.802 [2024-10-01 03:44:43.310049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:50.802 [2024-10-01 03:44:43.310070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:50.802 [2024-10-01 03:44:43.310088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.802 [2024-10-01 03:44:43.310628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.802 [2024-10-01 03:44:43.310664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:50.802 [2024-10-01 03:44:43.310684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.497 ms 00:17:50.802 [2024-10-01 03:44:43.310700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:17:50.802 [2024-10-01 03:44:43.310935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.802 [2024-10-01 03:44:43.310952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:50.802 [2024-10-01 03:44:43.310970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.192 ms 00:17:50.802 [2024-10-01 03:44:43.310985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.802 [2024-10-01 03:44:43.327777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.802 [2024-10-01 03:44:43.327806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:50.802 [2024-10-01 03:44:43.327816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.729 ms 00:17:50.802 [2024-10-01 03:44:43.327824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:50.802 [2024-10-01 03:44:43.338133] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:50.802 [2024-10-01 03:44:43.338162] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:50.802 [2024-10-01 03:44:43.338174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:50.802 [2024-10-01 03:44:43.338181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:50.802 [2024-10-01 03:44:43.338190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.230 ms 00:17:50.802 [2024-10-01 03:44:43.338197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.357191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.357224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:51.061 [2024-10-01 03:44:43.357235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.925 ms 00:17:51.061 [2024-10-01 03:44:43.357247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.366624] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.366650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:51.061 [2024-10-01 03:44:43.366661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.305 ms 00:17:51.061 [2024-10-01 03:44:43.366667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.375341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.375529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:51.061 [2024-10-01 03:44:43.375544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.626 ms 00:17:51.061 [2024-10-01 03:44:43.375550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.376067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.376084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:51.061 [2024-10-01 03:44:43.376093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.449 ms 00:17:51.061 [2024-10-01 03:44:43.376101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 
03:44:43.424020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.424082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:51.061 [2024-10-01 03:44:43.424097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.897 ms 00:17:51.061 [2024-10-01 03:44:43.424106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.432392] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:51.061 [2024-10-01 03:44:43.447555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.447612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:51.061 [2024-10-01 03:44:43.447624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.346 ms 00:17:51.061 [2024-10-01 03:44:43.447634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.447752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.447763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:51.061 [2024-10-01 03:44:43.447772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:51.061 [2024-10-01 03:44:43.447780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.447828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.447837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:51.061 [2024-10-01 03:44:43.447844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:17:51.061 [2024-10-01 03:44:43.447852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.447873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.447882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:51.061 [2024-10-01 03:44:43.447891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:51.061 [2024-10-01 03:44:43.447903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.447933] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:51.061 [2024-10-01 03:44:43.447947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.447954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:51.061 [2024-10-01 03:44:43.447962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:17:51.061 [2024-10-01 03:44:43.447968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.466869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.467080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:51.061 [2024-10-01 03:44:43.467100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.879 ms 00:17:51.061 [2024-10-01 03:44:43.467109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.467190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.061 [2024-10-01 03:44:43.467199] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:51.061 [2024-10-01 03:44:43.467209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:51.061 [2024-10-01 03:44:43.467215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.061 [2024-10-01 03:44:43.467996] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:51.061 [2024-10-01 03:44:43.470293] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 246.571 ms, result 0 00:17:51.062 [2024-10-01 03:44:43.471715] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:51.062 Some configs were skipped because the RPC state that can call them passed over. 00:17:51.062 03:44:43 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:17:51.320 [2024-10-01 03:44:43.699970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.320 [2024-10-01 03:44:43.700051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:51.320 [2024-10-01 03:44:43.700063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.470 ms 00:17:51.320 [2024-10-01 03:44:43.700072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.320 [2024-10-01 03:44:43.700101] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.609 ms, result 0 00:17:51.320 true 00:17:51.320 03:44:43 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:17:51.579 [2024-10-01 03:44:43.911904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:51.579 [2024-10-01 03:44:43.911965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:17:51.579 [2024-10-01 03:44:43.911978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.219 ms 00:17:51.579 [2024-10-01 03:44:43.911984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:51.579 [2024-10-01 03:44:43.912029] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.339 ms, result 0 00:17:51.579 true 00:17:51.579 03:44:43 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 74524 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74524 ']' 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74524 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@955 -- # uname 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74524 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74524' 00:17:51.579 killing process with pid 74524 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@969 -- # kill 74524 00:17:51.579 03:44:43 ftl.ftl_trim -- common/autotest_common.sh@974 -- # wait 74524 00:17:52.147 [2024-10-01 03:44:44.537376] 
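[Annotation] With 'FTL startup' complete (246.571 ms above), trim.sh@99 and @100 issue two bdev_ftl_unmap RPCs that trim 1,024 blocks at each end of the device: LBA 0, and LBA 23591936, which is 23,592,960 minus 1,024, i.e. the last 1,024 entries of the L2P reported during startup. Each unmap runs as a short 'FTL trim' management process (about 1.3 to 1.6 ms) and returns true. The killprocess 74524 that follows is the standard autotest teardown; judging from the xtrace, its shape is roughly the sketch below, not the exact autotest_common.sh code:

    # confirm the pid is alive, refuse to signal a sudo wrapper, then terminate and reap
    kill -0 "$pid"                                   # errors out if the process is already gone
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 in this run
    [[ $process_name != sudo ]]                      # never kill a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap it and propagate the exit status

The SIGTERM kicks off the 'FTL shutdown' management process whose persist and rollback steps follow.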
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.147 [2024-10-01 03:44:44.537641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:52.147 [2024-10-01 03:44:44.537752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:52.147 [2024-10-01 03:44:44.537775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.147 [2024-10-01 03:44:44.537814] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:17:52.147 [2024-10-01 03:44:44.540110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.147 [2024-10-01 03:44:44.540218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:52.147 [2024-10-01 03:44:44.540274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.262 ms 00:17:52.147 [2024-10-01 03:44:44.540292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.147 [2024-10-01 03:44:44.540554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.147 [2024-10-01 03:44:44.540584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:52.147 [2024-10-01 03:44:44.540605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.212 ms 00:17:52.147 [2024-10-01 03:44:44.540683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.147 [2024-10-01 03:44:44.543835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.147 [2024-10-01 03:44:44.543928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:52.147 [2024-10-01 03:44:44.543982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.126 ms 00:17:52.147 [2024-10-01 03:44:44.544000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.147 [2024-10-01 03:44:44.549241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.147 [2024-10-01 03:44:44.549336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:52.147 [2024-10-01 03:44:44.549412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.187 ms 00:17:52.147 [2024-10-01 03:44:44.549433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.147 [2024-10-01 03:44:44.557014] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.147 [2024-10-01 03:44:44.557107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:52.147 [2024-10-01 03:44:44.557124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.510 ms 00:17:52.147 [2024-10-01 03:44:44.557130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.147 [2024-10-01 03:44:44.564068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.147 [2024-10-01 03:44:44.564171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:52.147 [2024-10-01 03:44:44.564187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.908 ms 00:17:52.147 [2024-10-01 03:44:44.564202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.147 [2024-10-01 03:44:44.564315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.147 [2024-10-01 03:44:44.564324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:52.147 [2024-10-01 03:44:44.564333] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:17:52.147 [2024-10-01 03:44:44.564341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.147 [2024-10-01 03:44:44.572205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.147 [2024-10-01 03:44:44.572230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:52.147 [2024-10-01 03:44:44.572238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.844 ms 00:17:52.147 [2024-10-01 03:44:44.572245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.147 [2024-10-01 03:44:44.579755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.148 [2024-10-01 03:44:44.579851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:52.148 [2024-10-01 03:44:44.579869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.478 ms 00:17:52.148 [2024-10-01 03:44:44.579875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.148 [2024-10-01 03:44:44.587049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.148 [2024-10-01 03:44:44.587072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:52.148 [2024-10-01 03:44:44.587081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.144 ms 00:17:52.148 [2024-10-01 03:44:44.587087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.148 [2024-10-01 03:44:44.594066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.148 [2024-10-01 03:44:44.594157] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:52.148 [2024-10-01 03:44:44.594170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.924 ms 00:17:52.148 [2024-10-01 03:44:44.594176] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.148 [2024-10-01 03:44:44.594203] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:52.148 [2024-10-01 03:44:44.594216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594290] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594338] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 
[2024-10-01 03:44:44.594465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:17:52.148 [2024-10-01 03:44:44.594632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:52.148 [2024-10-01 03:44:44.594686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594870] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:52.149 [2024-10-01 03:44:44.594903] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:52.149 [2024-10-01 03:44:44.594912] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53a1b279-b409-4899-8013-3cb089931240 00:17:52.149 [2024-10-01 03:44:44.594919] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:52.149 [2024-10-01 03:44:44.594926] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:52.149 [2024-10-01 03:44:44.594932] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:52.149 [2024-10-01 03:44:44.594939] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:52.149 [2024-10-01 03:44:44.594950] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:52.149 [2024-10-01 03:44:44.594958] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:52.149 [2024-10-01 03:44:44.594965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:52.149 [2024-10-01 03:44:44.594972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:52.149 [2024-10-01 03:44:44.594977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:52.149 [2024-10-01 03:44:44.594984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
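[Annotation] The shutdown statistics just above decode as follows: WAF (write amplification factor, media writes per user write) is total writes divided by user writes, and since user writes is 0 here (nothing but the two 1,024-block trims and metadata persists touched ftl0), the quotient 960 / 0 is undefined, which ftl_debug.c prints as inf. Consistently, total valid LBAs is 0 and each of the 100 bands reports 0 / 261120 wr_cnt: 0 state: free.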
00:17:52.149 [2024-10-01 03:44:44.594989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:52.149 [2024-10-01 03:44:44.594997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:17:52.149 [2024-10-01 03:44:44.595018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.149 [2024-10-01 03:44:44.605009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.149 [2024-10-01 03:44:44.605034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:52.149 [2024-10-01 03:44:44.605046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.965 ms 00:17:52.149 [2024-10-01 03:44:44.605052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.149 [2024-10-01 03:44:44.605380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:52.149 [2024-10-01 03:44:44.605394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:52.149 [2024-10-01 03:44:44.605403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:17:52.149 [2024-10-01 03:44:44.605409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.149 [2024-10-01 03:44:44.637544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.149 [2024-10-01 03:44:44.637584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:52.149 [2024-10-01 03:44:44.637594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.149 [2024-10-01 03:44:44.637604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.149 [2024-10-01 03:44:44.637709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.149 [2024-10-01 03:44:44.637717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:52.149 [2024-10-01 03:44:44.637725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.149 [2024-10-01 03:44:44.637732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.149 [2024-10-01 03:44:44.637772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.149 [2024-10-01 03:44:44.637780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:52.149 [2024-10-01 03:44:44.637790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.149 [2024-10-01 03:44:44.637796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.149 [2024-10-01 03:44:44.637815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.149 [2024-10-01 03:44:44.637823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:52.149 [2024-10-01 03:44:44.637830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.149 [2024-10-01 03:44:44.637837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-10-01 03:44:44.701062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.409 [2024-10-01 03:44:44.701120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:52.409 [2024-10-01 03:44:44.701134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.409 [2024-10-01 03:44:44.701144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-10-01 
03:44:44.752379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.409 [2024-10-01 03:44:44.752612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:52.409 [2024-10-01 03:44:44.752630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.409 [2024-10-01 03:44:44.752637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-10-01 03:44:44.752742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.409 [2024-10-01 03:44:44.752751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:52.409 [2024-10-01 03:44:44.752761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.409 [2024-10-01 03:44:44.752768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-10-01 03:44:44.752797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.409 [2024-10-01 03:44:44.752807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:52.409 [2024-10-01 03:44:44.752814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.409 [2024-10-01 03:44:44.752821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-10-01 03:44:44.752903] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.409 [2024-10-01 03:44:44.752912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:52.409 [2024-10-01 03:44:44.752920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.409 [2024-10-01 03:44:44.752927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-10-01 03:44:44.752957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.409 [2024-10-01 03:44:44.752965] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:52.409 [2024-10-01 03:44:44.752975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.409 [2024-10-01 03:44:44.752982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-10-01 03:44:44.753034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.409 [2024-10-01 03:44:44.753042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:52.409 [2024-10-01 03:44:44.753051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.409 [2024-10-01 03:44:44.753058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-10-01 03:44:44.753099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:52.409 [2024-10-01 03:44:44.753109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:52.409 [2024-10-01 03:44:44.753117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:52.409 [2024-10-01 03:44:44.753124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:52.409 [2024-10-01 03:44:44.753248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 215.853 ms, result 0 00:17:53.344 03:44:45 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 
--json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:53.344 [2024-10-01 03:44:45.599461] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:17:53.344 [2024-10-01 03:44:45.599578] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74582 ] 00:17:53.344 [2024-10-01 03:44:45.741934] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:53.603 [2024-10-01 03:44:45.924508] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:17:53.862 [2024-10-01 03:44:46.153332] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:53.862 [2024-10-01 03:44:46.153613] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:17:53.862 [2024-10-01 03:44:46.307438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.862 [2024-10-01 03:44:46.307488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:53.862 [2024-10-01 03:44:46.307502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:53.862 [2024-10-01 03:44:46.307509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.862 [2024-10-01 03:44:46.309941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.862 [2024-10-01 03:44:46.309977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:53.862 [2024-10-01 03:44:46.309987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.416 ms 00:17:53.862 [2024-10-01 03:44:46.309996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.862 [2024-10-01 03:44:46.310088] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:53.862 [2024-10-01 03:44:46.310670] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:53.862 [2024-10-01 03:44:46.310696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.862 [2024-10-01 03:44:46.310705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:53.862 [2024-10-01 03:44:46.310712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.615 ms 00:17:53.862 [2024-10-01 03:44:46.310719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.862 [2024-10-01 03:44:46.312170] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:17:53.862 [2024-10-01 03:44:46.322505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.862 [2024-10-01 03:44:46.322697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:17:53.862 [2024-10-01 03:44:46.322712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.337 ms 00:17:53.862 [2024-10-01 03:44:46.322719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.862 [2024-10-01 03:44:46.322791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.862 [2024-10-01 03:44:46.322801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:17:53.862 [2024-10-01 03:44:46.322810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:17:53.862 [2024-10-01 
03:44:46.322816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.862 [2024-10-01 03:44:46.329086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.862 [2024-10-01 03:44:46.329229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:53.862 [2024-10-01 03:44:46.329241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.234 ms 00:17:53.863 [2024-10-01 03:44:46.329249] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.863 [2024-10-01 03:44:46.329335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.863 [2024-10-01 03:44:46.329346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:53.863 [2024-10-01 03:44:46.329353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:17:53.863 [2024-10-01 03:44:46.329359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.863 [2024-10-01 03:44:46.329384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.863 [2024-10-01 03:44:46.329392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:53.863 [2024-10-01 03:44:46.329398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:53.863 [2024-10-01 03:44:46.329405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.863 [2024-10-01 03:44:46.329423] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:17:53.863 [2024-10-01 03:44:46.332433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.863 [2024-10-01 03:44:46.332541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:53.863 [2024-10-01 03:44:46.332553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.015 ms 00:17:53.863 [2024-10-01 03:44:46.332560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.863 [2024-10-01 03:44:46.332592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.863 [2024-10-01 03:44:46.332604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:53.863 [2024-10-01 03:44:46.332611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:53.863 [2024-10-01 03:44:46.332618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.863 [2024-10-01 03:44:46.332633] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:17:53.863 [2024-10-01 03:44:46.332650] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:17:53.863 [2024-10-01 03:44:46.332679] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:17:53.863 [2024-10-01 03:44:46.332692] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:17:53.863 [2024-10-01 03:44:46.332779] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:53.863 [2024-10-01 03:44:46.332788] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:53.863 [2024-10-01 03:44:46.332797] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:17:53.863 [2024-10-01 03:44:46.332805] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:53.863 [2024-10-01 03:44:46.332813] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:53.863 [2024-10-01 03:44:46.332819] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:17:53.863 [2024-10-01 03:44:46.332825] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:53.863 [2024-10-01 03:44:46.332832] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:53.863 [2024-10-01 03:44:46.332838] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:53.863 [2024-10-01 03:44:46.332844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.863 [2024-10-01 03:44:46.332853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:53.863 [2024-10-01 03:44:46.332861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:17:53.863 [2024-10-01 03:44:46.332867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.863 [2024-10-01 03:44:46.332940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.863 [2024-10-01 03:44:46.332948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:53.863 [2024-10-01 03:44:46.332955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:17:53.863 [2024-10-01 03:44:46.332961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.863 [2024-10-01 03:44:46.333052] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:53.863 [2024-10-01 03:44:46.333063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:53.863 [2024-10-01 03:44:46.333072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.863 [2024-10-01 03:44:46.333078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:53.863 [2024-10-01 03:44:46.333090] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:17:53.863 [2024-10-01 03:44:46.333102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:53.863 [2024-10-01 03:44:46.333109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333115] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.863 [2024-10-01 03:44:46.333121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:53.863 [2024-10-01 03:44:46.333134] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:17:53.863 [2024-10-01 03:44:46.333139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:53.863 [2024-10-01 03:44:46.333152] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:53.863 [2024-10-01 03:44:46.333157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:17:53.863 [2024-10-01 03:44:46.333163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:17:53.863 [2024-10-01 03:44:46.333174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:17:53.863 [2024-10-01 03:44:46.333180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333187] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:53.863 [2024-10-01 03:44:46.333193] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333198] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.863 [2024-10-01 03:44:46.333203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:53.863 [2024-10-01 03:44:46.333208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333214] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.863 [2024-10-01 03:44:46.333219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:53.863 [2024-10-01 03:44:46.333224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333229] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.863 [2024-10-01 03:44:46.333235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:53.863 [2024-10-01 03:44:46.333240] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:53.863 [2024-10-01 03:44:46.333250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:53.863 [2024-10-01 03:44:46.333255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:53.863 [2024-10-01 03:44:46.333265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:53.863 [2024-10-01 03:44:46.333271] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:17:53.863 [2024-10-01 03:44:46.333276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:53.863 [2024-10-01 03:44:46.333282] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:53.863 [2024-10-01 03:44:46.333287] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:17:53.863 [2024-10-01 03:44:46.333292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333297] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:53.863 [2024-10-01 03:44:46.333302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:17:53.863 [2024-10-01 03:44:46.333307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333313] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:53.863 [2024-10-01 03:44:46.333320] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:53.863 [2024-10-01 03:44:46.333327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:53.863 [2024-10-01 03:44:46.333333] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:53.863 [2024-10-01 03:44:46.333340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:53.863 [2024-10-01 03:44:46.333345] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:53.863 [2024-10-01 03:44:46.333350] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:53.863 [2024-10-01 03:44:46.333355] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:53.863 [2024-10-01 03:44:46.333361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:53.863 [2024-10-01 03:44:46.333367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:53.863 [2024-10-01 03:44:46.333373] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:53.863 [2024-10-01 03:44:46.333383] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.863 [2024-10-01 03:44:46.333390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:17:53.863 [2024-10-01 03:44:46.333396] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:17:53.863 [2024-10-01 03:44:46.333402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:17:53.863 [2024-10-01 03:44:46.333408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:17:53.863 [2024-10-01 03:44:46.333414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:17:53.863 [2024-10-01 03:44:46.333420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:17:53.863 [2024-10-01 03:44:46.333425] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:17:53.863 [2024-10-01 03:44:46.333430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:17:53.863 [2024-10-01 03:44:46.333436] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:17:53.864 [2024-10-01 03:44:46.333441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:17:53.864 [2024-10-01 03:44:46.333447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:17:53.864 [2024-10-01 03:44:46.333452] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:17:53.864 [2024-10-01 03:44:46.333458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:17:53.864 [2024-10-01 03:44:46.333467] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:17:53.864 [2024-10-01 03:44:46.333472] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:53.864 [2024-10-01 03:44:46.333479] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:53.864 [2024-10-01 03:44:46.333485] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:53.864 [2024-10-01 03:44:46.333490] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:53.864 [2024-10-01 03:44:46.333496] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:53.864 [2024-10-01 03:44:46.333502] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:53.864 [2024-10-01 03:44:46.333509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.864 [2024-10-01 03:44:46.333518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:53.864 [2024-10-01 03:44:46.333524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.524 ms 00:17:53.864 [2024-10-01 03:44:46.333529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.864 [2024-10-01 03:44:46.371499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.864 [2024-10-01 03:44:46.371545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:53.864 [2024-10-01 03:44:46.371556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.916 ms 00:17:53.864 [2024-10-01 03:44:46.371563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.864 [2024-10-01 03:44:46.371686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.864 [2024-10-01 03:44:46.371697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:53.864 [2024-10-01 03:44:46.371705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:17:53.864 [2024-10-01 03:44:46.371712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.864 [2024-10-01 03:44:46.397920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.864 [2024-10-01 03:44:46.397953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:53.864 [2024-10-01 03:44:46.397962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.188 ms 00:17:53.864 [2024-10-01 03:44:46.397968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.864 [2024-10-01 03:44:46.398045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.864 [2024-10-01 03:44:46.398055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:53.864 [2024-10-01 03:44:46.398062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:53.864 [2024-10-01 03:44:46.398085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.864 [2024-10-01 03:44:46.398479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.864 [2024-10-01 03:44:46.398493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:53.864 [2024-10-01 03:44:46.398502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.373 ms 00:17:53.864 [2024-10-01 03:44:46.398509] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.864 [2024-10-01 03:44:46.398628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:17:53.864 [2024-10-01 03:44:46.398642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:53.864 [2024-10-01 03:44:46.398649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:17:53.864 [2024-10-01 03:44:46.398656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:53.864 [2024-10-01 03:44:46.410217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:53.864 [2024-10-01 03:44:46.410244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:53.864 [2024-10-01 03:44:46.410252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.519 ms 00:17:53.864 [2024-10-01 03:44:46.410258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.420411] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:17:54.123 [2024-10-01 03:44:46.420442] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:17:54.123 [2024-10-01 03:44:46.420452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.420459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:17:54.123 [2024-10-01 03:44:46.420467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.103 ms 00:17:54.123 [2024-10-01 03:44:46.420474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.439333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.439363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:17:54.123 [2024-10-01 03:44:46.439377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.800 ms 00:17:54.123 [2024-10-01 03:44:46.439384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.448408] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.448435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:17:54.123 [2024-10-01 03:44:46.448443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.965 ms 00:17:54.123 [2024-10-01 03:44:46.448449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.457234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.457399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:17:54.123 [2024-10-01 03:44:46.457413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.738 ms 00:17:54.123 [2024-10-01 03:44:46.457420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.457913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.457930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:54.123 [2024-10-01 03:44:46.457938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:17:54.123 [2024-10-01 03:44:46.457945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.506666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.506722] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:17:54.123 [2024-10-01 03:44:46.506734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.701 ms 00:17:54.123 [2024-10-01 03:44:46.506741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.515420] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:54.123 [2024-10-01 03:44:46.530348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.530392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:54.123 [2024-10-01 03:44:46.530405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.488 ms 00:17:54.123 [2024-10-01 03:44:46.530412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.530521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.530531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:17:54.123 [2024-10-01 03:44:46.530539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:17:54.123 [2024-10-01 03:44:46.530545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.530598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.530609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:54.123 [2024-10-01 03:44:46.530616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:17:54.123 [2024-10-01 03:44:46.530622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.530641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.530647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:54.123 [2024-10-01 03:44:46.530655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:54.123 [2024-10-01 03:44:46.530661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.530690] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:17:54.123 [2024-10-01 03:44:46.530698] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.530706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:17:54.123 [2024-10-01 03:44:46.530712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:17:54.123 [2024-10-01 03:44:46.530718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.549437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.549472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:54.123 [2024-10-01 03:44:46.549482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.701 ms 00:17:54.123 [2024-10-01 03:44:46.549489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.123 [2024-10-01 03:44:46.549569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:54.123 [2024-10-01 03:44:46.549578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:54.123 [2024-10-01 03:44:46.549586] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:17:54.123 [2024-10-01 03:44:46.549593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:54.124 [2024-10-01 03:44:46.550375] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:54.124 [2024-10-01 03:44:46.552732] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 242.670 ms, result 0 00:17:54.124 [2024-10-01 03:44:46.553638] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:54.124 [2024-10-01 03:44:46.568447] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:00.178  Copying: 44/256 [MB] (44 MBps) Copying: 86/256 [MB] (42 MBps) Copying: 129/256 [MB] (42 MBps) Copying: 178/256 [MB] (48 MBps) Copying: 220/256 [MB] (42 MBps) Copying: 256/256 [MB] (average 43 MBps)[2024-10-01 03:44:52.665042] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:00.178 [2024-10-01 03:44:52.675412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.178 [2024-10-01 03:44:52.675671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:00.178 [2024-10-01 03:44:52.675692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:00.178 [2024-10-01 03:44:52.675701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.178 [2024-10-01 03:44:52.675732] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:00.178 [2024-10-01 03:44:52.679427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.178 [2024-10-01 03:44:52.679458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:00.178 [2024-10-01 03:44:52.679469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.679 ms 00:18:00.178 [2024-10-01 03:44:52.679477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.178 [2024-10-01 03:44:52.679763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.178 [2024-10-01 03:44:52.679783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:00.178 [2024-10-01 03:44:52.679792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:18:00.178 [2024-10-01 03:44:52.679799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.178 [2024-10-01 03:44:52.683492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.178 [2024-10-01 03:44:52.683512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:00.178 [2024-10-01 03:44:52.683521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.677 ms 00:18:00.178 [2024-10-01 03:44:52.683529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.178 [2024-10-01 03:44:52.690519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.178 [2024-10-01 03:44:52.690656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:00.178 [2024-10-01 03:44:52.690676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.972 ms 00:18:00.178 [2024-10-01 03:44:52.690685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
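[Aside, not from the captured run] Each mngt/ftl_mngt.c trace_step record above reports a step name, its duration and a status, so the per-step breakdown of the 242.670 ms 'FTL startup' process is recoverable from the raw console output. A minimal shell sketch, assuming the output was saved as ftl.log (the file name and the two-pass extraction are illustrative, not part of the harness):

    # pull the step names; strip the elapsed-time marker that follows each name
    grep -oE 'name: [A-Za-z0-9 ]+' ftl.log | sed 's/^name: //; s/ [0-9]*$//' > steps.txt
    # pull the matching durations (trace_step always logs name, then duration)
    grep -oE 'duration: [0-9.]+ ms' ftl.log | sed 's/duration: //; s/ ms//' > durations.txt
    # pair them up and list the slowest steps first
    paste durations.txt steps.txt | sort -rn | head

On this run the top entries would be the 48.701 ms 'Restore P2L checkpoints' and 37.916 ms 'Initialize metadata' steps visible above.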
00:18:00.178 [2024-10-01 03:44:52.714630] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.178 [2024-10-01 03:44:52.714775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:00.178 [2024-10-01 03:44:52.714792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.880 ms 00:18:00.178 [2024-10-01 03:44:52.714800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.437 [2024-10-01 03:44:52.730067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.437 [2024-10-01 03:44:52.730102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:00.437 [2024-10-01 03:44:52.730114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.241 ms 00:18:00.437 [2024-10-01 03:44:52.730123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.437 [2024-10-01 03:44:52.730266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.437 [2024-10-01 03:44:52.730278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:00.437 [2024-10-01 03:44:52.730287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.093 ms 00:18:00.437 [2024-10-01 03:44:52.730295] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.437 [2024-10-01 03:44:52.755355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.437 [2024-10-01 03:44:52.755388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:00.437 [2024-10-01 03:44:52.755400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.038 ms 00:18:00.437 [2024-10-01 03:44:52.755407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.437 [2024-10-01 03:44:52.778262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.437 [2024-10-01 03:44:52.778295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:00.437 [2024-10-01 03:44:52.778307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.820 ms 00:18:00.437 [2024-10-01 03:44:52.778314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.437 [2024-10-01 03:44:52.800828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.437 [2024-10-01 03:44:52.800863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:00.437 [2024-10-01 03:44:52.800875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.490 ms 00:18:00.437 [2024-10-01 03:44:52.800882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.437 [2024-10-01 03:44:52.823264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.437 [2024-10-01 03:44:52.823295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:00.437 [2024-10-01 03:44:52.823307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.327 ms 00:18:00.437 [2024-10-01 03:44:52.823314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.438 [2024-10-01 03:44:52.823337] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:00.438 [2024-10-01 03:44:52.823352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 
261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823754] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823946] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.823993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.824020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.824029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.824038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.824045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:00.438 [2024-10-01 03:44:52.824053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:00.439 [2024-10-01 03:44:52.824190] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:00.439 [2024-10-01 03:44:52.824198] ftl_debug.c: 
212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 53a1b279-b409-4899-8013-3cb089931240 00:18:00.439 [2024-10-01 03:44:52.824207] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:00.439 [2024-10-01 03:44:52.824214] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:00.439 [2024-10-01 03:44:52.824221] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:00.439 [2024-10-01 03:44:52.824233] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:00.439 [2024-10-01 03:44:52.824241] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:00.439 [2024-10-01 03:44:52.824249] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:00.439 [2024-10-01 03:44:52.824256] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:00.439 [2024-10-01 03:44:52.824262] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:00.439 [2024-10-01 03:44:52.824269] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:00.439 [2024-10-01 03:44:52.824276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.439 [2024-10-01 03:44:52.824284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:00.439 [2024-10-01 03:44:52.824293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.941 ms 00:18:00.439 [2024-10-01 03:44:52.824300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.439 [2024-10-01 03:44:52.837331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.439 [2024-10-01 03:44:52.837373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:00.439 [2024-10-01 03:44:52.837383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.000 ms 00:18:00.439 [2024-10-01 03:44:52.837391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.439 [2024-10-01 03:44:52.837765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:00.439 [2024-10-01 03:44:52.837782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:00.439 [2024-10-01 03:44:52.837792] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.337 ms 00:18:00.439 [2024-10-01 03:44:52.837799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.439 [2024-10-01 03:44:52.869885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.439 [2024-10-01 03:44:52.869932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:00.439 [2024-10-01 03:44:52.869944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.439 [2024-10-01 03:44:52.869953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.439 [2024-10-01 03:44:52.870073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.439 [2024-10-01 03:44:52.870084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:00.439 [2024-10-01 03:44:52.870092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.439 [2024-10-01 03:44:52.870100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.439 [2024-10-01 03:44:52.870145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.439 [2024-10-01 03:44:52.870159] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:00.439 [2024-10-01 03:44:52.870168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.439 [2024-10-01 03:44:52.870175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.439 [2024-10-01 03:44:52.870193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.439 [2024-10-01 03:44:52.870202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:00.439 [2024-10-01 03:44:52.870210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.439 [2024-10-01 03:44:52.870218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.439 [2024-10-01 03:44:52.951626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.439 [2024-10-01 03:44:52.951891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:00.439 [2024-10-01 03:44:52.951911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.439 [2024-10-01 03:44:52.951919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.698 [2024-10-01 03:44:53.018705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.698 [2024-10-01 03:44:53.018947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:00.698 [2024-10-01 03:44:53.018964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.698 [2024-10-01 03:44:53.018972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.698 [2024-10-01 03:44:53.019082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.698 [2024-10-01 03:44:53.019092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:00.698 [2024-10-01 03:44:53.019104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.698 [2024-10-01 03:44:53.019112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.698 [2024-10-01 03:44:53.019143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.698 [2024-10-01 03:44:53.019152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:00.698 [2024-10-01 03:44:53.019160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.698 [2024-10-01 03:44:53.019168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.698 [2024-10-01 03:44:53.019273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.698 [2024-10-01 03:44:53.019284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:00.698 [2024-10-01 03:44:53.019292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.698 [2024-10-01 03:44:53.019302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.698 [2024-10-01 03:44:53.019338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.698 [2024-10-01 03:44:53.019348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:00.698 [2024-10-01 03:44:53.019357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.698 [2024-10-01 03:44:53.019365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.698 [2024-10-01 03:44:53.019404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:18:00.698 [2024-10-01 03:44:53.019413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:00.698 [2024-10-01 03:44:53.019422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.698 [2024-10-01 03:44:53.019433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.698 [2024-10-01 03:44:53.019479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:00.698 [2024-10-01 03:44:53.019489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:00.698 [2024-10-01 03:44:53.019497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:00.698 [2024-10-01 03:44:53.019506] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:00.699 [2024-10-01 03:44:53.019655] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 344.241 ms, result 0 00:18:01.635 00:18:01.635 00:18:01.635 03:44:53 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:01.893 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:18:01.893 03:44:54 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:01.893 03:44:54 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:18:01.893 03:44:54 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:18:01.893 03:44:54 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:01.893 03:44:54 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:18:01.893 03:44:54 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:18:02.152 03:44:54 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 74524 00:18:02.152 03:44:54 ftl.ftl_trim -- common/autotest_common.sh@950 -- # '[' -z 74524 ']' 00:18:02.152 03:44:54 ftl.ftl_trim -- common/autotest_common.sh@954 -- # kill -0 74524 00:18:02.152 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74524) - No such process 00:18:02.152 Process with pid 74524 is not found 00:18:02.152 03:44:54 ftl.ftl_trim -- common/autotest_common.sh@977 -- # echo 'Process with pid 74524 is not found' 00:18:02.152 00:18:02.152 real 0m48.722s 00:18:02.152 user 1m5.553s 00:18:02.152 sys 0m13.644s 00:18:02.152 ************************************ 00:18:02.152 END TEST ftl_trim 00:18:02.152 ************************************ 00:18:02.152 03:44:54 ftl.ftl_trim -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:02.152 03:44:54 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:02.152 03:44:54 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:02.152 03:44:54 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:02.152 03:44:54 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:02.152 03:44:54 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:02.152 ************************************ 00:18:02.152 START TEST ftl_restore 00:18:02.152 ************************************ 00:18:02.152 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:18:02.152 * Looking for test storage... 
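[Aside, not from the captured run] The run_test line above records the full invocation of the restore suite. Outside the autotest harness the same script could in principle be run directly, assuming a built SPDK tree at the path shown and root privileges (both assumptions, not stated in the log):

    # hypothetical standalone run, mirroring the run_test arguments above
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0

Per the getopts trace further down, -c selects the NV-cache device (0000:00:10.0) and the remaining positional argument is the base device (0000:00:11.0).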
00:18:02.152 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:02.152 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:18:02.152 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lcov --version 00:18:02.152 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:18:02.152 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:18:02.152 03:44:54 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:02.152 03:44:54 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:02.152 03:44:54 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:02.152 03:44:54 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.152 03:44:54 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:18:02.152 03:44:54 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:18:02.152 03:44:54 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:18:02.152 03:44:54 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:18:02.152 03:44:54 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:02.153 03:44:54 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:18:02.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.153 --rc genhtml_branch_coverage=1 00:18:02.153 --rc genhtml_function_coverage=1 00:18:02.153 --rc genhtml_legend=1 00:18:02.153 --rc geninfo_all_blocks=1 00:18:02.153 --rc geninfo_unexecuted_blocks=1 00:18:02.153 00:18:02.153 ' 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:18:02.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.153 --rc genhtml_branch_coverage=1 00:18:02.153 --rc genhtml_function_coverage=1 
00:18:02.153 --rc genhtml_legend=1 00:18:02.153 --rc geninfo_all_blocks=1 00:18:02.153 --rc geninfo_unexecuted_blocks=1 00:18:02.153 00:18:02.153 ' 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:18:02.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.153 --rc genhtml_branch_coverage=1 00:18:02.153 --rc genhtml_function_coverage=1 00:18:02.153 --rc genhtml_legend=1 00:18:02.153 --rc geninfo_all_blocks=1 00:18:02.153 --rc geninfo_unexecuted_blocks=1 00:18:02.153 00:18:02.153 ' 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:18:02.153 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.153 --rc genhtml_branch_coverage=1 00:18:02.153 --rc genhtml_function_coverage=1 00:18:02.153 --rc genhtml_legend=1 00:18:02.153 --rc geninfo_all_blocks=1 00:18:02.153 --rc geninfo_unexecuted_blocks=1 00:18:02.153 00:18:02.153 ' 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.y3UMbxXLXr 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=74743 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 74743 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@831 -- # '[' -z 74743 ']' 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.153 03:44:54 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:02.153 03:44:54 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:18:02.412 [2024-10-01 03:44:54.734438] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
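The xtrace above shows how restore.sh consumes its arguments: getopts maps -c 0000:00:10.0 into nv_cache, the shift leaves 0000:00:11.0 as the base device, and a 240 s timeout plus a restore_kill trap guard the run. A minimal sketch of that pattern, reconstructed from the trace and the optstring :u:c:f (only -c is exercised in this run; the -u and -f branch meanings are assumptions):

#!/usr/bin/env bash
# Sketch of restore.sh's option handling as seen in the xtrace above.
# Example invocation matching this run: ./restore_sketch.sh -c 0000:00:10.0 0000:00:11.0
nv_cache='' uuid='' fast=''
while getopts :u:c:f opt; do
  case $opt in
    c) nv_cache=$OPTARG ;;   # NV cache controller BDF (0000:00:10.0 here)
    u) uuid=$OPTARG ;;       # assumed: restore an existing FTL instance by UUID
    f) fast=1 ;;             # assumed: fast-shutdown variant of the test
  esac
done
shift $((OPTIND - 1))        # expands to 'shift 2' for '-c <bdf>', as in the trace
device=$1                    # base device BDF (0000:00:11.0 here)
timeout=240
echo "nv_cache=$nv_cache device=$device timeout=$timeout"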
00:18:02.412 [2024-10-01 03:44:54.734754] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74743 ] 00:18:02.412 [2024-10-01 03:44:54.884579] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.671 [2024-10-01 03:44:55.097638] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:03.238 03:44:55 ftl.ftl_restore -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.238 03:44:55 ftl.ftl_restore -- common/autotest_common.sh@864 -- # return 0 00:18:03.238 03:44:55 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:03.238 03:44:55 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:18:03.238 03:44:55 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:03.238 03:44:55 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:18:03.238 03:44:55 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:18:03.238 03:44:55 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:03.497 03:44:56 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:03.497 03:44:56 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:18:03.497 03:44:56 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:03.497 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:18:03.497 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:03.497 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:03.497 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:03.497 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:03.755 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:03.755 { 00:18:03.755 "name": "nvme0n1", 00:18:03.755 "aliases": [ 00:18:03.755 "56748a7e-3963-42e8-8850-731d06584559" 00:18:03.755 ], 00:18:03.755 "product_name": "NVMe disk", 00:18:03.755 "block_size": 4096, 00:18:03.755 "num_blocks": 1310720, 00:18:03.755 "uuid": "56748a7e-3963-42e8-8850-731d06584559", 00:18:03.755 "numa_id": -1, 00:18:03.755 "assigned_rate_limits": { 00:18:03.755 "rw_ios_per_sec": 0, 00:18:03.755 "rw_mbytes_per_sec": 0, 00:18:03.755 "r_mbytes_per_sec": 0, 00:18:03.755 "w_mbytes_per_sec": 0 00:18:03.755 }, 00:18:03.755 "claimed": true, 00:18:03.755 "claim_type": "read_many_write_one", 00:18:03.755 "zoned": false, 00:18:03.755 "supported_io_types": { 00:18:03.755 "read": true, 00:18:03.755 "write": true, 00:18:03.755 "unmap": true, 00:18:03.755 "flush": true, 00:18:03.755 "reset": true, 00:18:03.755 "nvme_admin": true, 00:18:03.755 "nvme_io": true, 00:18:03.755 "nvme_io_md": false, 00:18:03.755 "write_zeroes": true, 00:18:03.755 "zcopy": false, 00:18:03.755 "get_zone_info": false, 00:18:03.755 "zone_management": false, 00:18:03.755 "zone_append": false, 00:18:03.755 "compare": true, 00:18:03.755 "compare_and_write": false, 00:18:03.755 "abort": true, 00:18:03.755 "seek_hole": false, 00:18:03.755 "seek_data": false, 00:18:03.755 "copy": true, 00:18:03.755 "nvme_iov_md": false 00:18:03.755 }, 00:18:03.755 "driver_specific": { 00:18:03.755 "nvme": [ 
00:18:03.755 { 00:18:03.755 "pci_address": "0000:00:11.0", 00:18:03.755 "trid": { 00:18:03.755 "trtype": "PCIe", 00:18:03.755 "traddr": "0000:00:11.0" 00:18:03.755 }, 00:18:03.755 "ctrlr_data": { 00:18:03.755 "cntlid": 0, 00:18:03.755 "vendor_id": "0x1b36", 00:18:03.755 "model_number": "QEMU NVMe Ctrl", 00:18:03.755 "serial_number": "12341", 00:18:03.755 "firmware_revision": "8.0.0", 00:18:03.755 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:03.755 "oacs": { 00:18:03.755 "security": 0, 00:18:03.755 "format": 1, 00:18:03.755 "firmware": 0, 00:18:03.755 "ns_manage": 1 00:18:03.755 }, 00:18:03.755 "multi_ctrlr": false, 00:18:03.755 "ana_reporting": false 00:18:03.755 }, 00:18:03.755 "vs": { 00:18:03.755 "nvme_version": "1.4" 00:18:03.755 }, 00:18:03.755 "ns_data": { 00:18:03.755 "id": 1, 00:18:03.755 "can_share": false 00:18:03.755 } 00:18:03.755 } 00:18:03.755 ], 00:18:03.755 "mp_policy": "active_passive" 00:18:03.755 } 00:18:03.755 } 00:18:03.755 ]' 00:18:03.755 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:03.755 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:03.755 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:03.755 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=1310720 00:18:03.755 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:18:03.755 03:44:56 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 5120 00:18:03.755 03:44:56 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:18:03.755 03:44:56 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:03.755 03:44:56 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:18:03.755 03:44:56 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:03.755 03:44:56 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:04.013 03:44:56 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=5a6b6fcb-b1ea-4dd1-a9aa-323ef715e8aa 00:18:04.013 03:44:56 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:18:04.013 03:44:56 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a6b6fcb-b1ea-4dd1-a9aa-323ef715e8aa 00:18:04.271 03:44:56 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:04.528 03:44:56 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=a1dfdd74-0b64-4beb-8888-2f9dc3972e0b 00:18:04.528 03:44:56 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u a1dfdd74-0b64-4beb-8888-2f9dc3972e0b 00:18:04.787 03:44:57 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:04.787 03:44:57 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:18:04.787 03:44:57 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:04.787 03:44:57 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:18:04.787 03:44:57 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:04.787 03:44:57 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:04.787 03:44:57 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:18:04.787 03:44:57 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:04.787 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:04.787 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:04.787 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:04.787 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:04.787 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:05.045 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:05.045 { 00:18:05.045 "name": "a946cb4f-c003-4435-8d31-4fbecca425ff", 00:18:05.045 "aliases": [ 00:18:05.045 "lvs/nvme0n1p0" 00:18:05.045 ], 00:18:05.045 "product_name": "Logical Volume", 00:18:05.045 "block_size": 4096, 00:18:05.045 "num_blocks": 26476544, 00:18:05.045 "uuid": "a946cb4f-c003-4435-8d31-4fbecca425ff", 00:18:05.045 "assigned_rate_limits": { 00:18:05.045 "rw_ios_per_sec": 0, 00:18:05.045 "rw_mbytes_per_sec": 0, 00:18:05.045 "r_mbytes_per_sec": 0, 00:18:05.045 "w_mbytes_per_sec": 0 00:18:05.045 }, 00:18:05.045 "claimed": false, 00:18:05.045 "zoned": false, 00:18:05.045 "supported_io_types": { 00:18:05.045 "read": true, 00:18:05.045 "write": true, 00:18:05.045 "unmap": true, 00:18:05.045 "flush": false, 00:18:05.045 "reset": true, 00:18:05.045 "nvme_admin": false, 00:18:05.045 "nvme_io": false, 00:18:05.045 "nvme_io_md": false, 00:18:05.045 "write_zeroes": true, 00:18:05.045 "zcopy": false, 00:18:05.045 "get_zone_info": false, 00:18:05.045 "zone_management": false, 00:18:05.045 "zone_append": false, 00:18:05.045 "compare": false, 00:18:05.045 "compare_and_write": false, 00:18:05.045 "abort": false, 00:18:05.045 "seek_hole": true, 00:18:05.045 "seek_data": true, 00:18:05.045 "copy": false, 00:18:05.045 "nvme_iov_md": false 00:18:05.045 }, 00:18:05.045 "driver_specific": { 00:18:05.045 "lvol": { 00:18:05.045 "lvol_store_uuid": "a1dfdd74-0b64-4beb-8888-2f9dc3972e0b", 00:18:05.045 "base_bdev": "nvme0n1", 00:18:05.045 "thin_provision": true, 00:18:05.045 "num_allocated_clusters": 0, 00:18:05.045 "snapshot": false, 00:18:05.045 "clone": false, 00:18:05.045 "esnap_clone": false 00:18:05.045 } 00:18:05.045 } 00:18:05.045 } 00:18:05.045 ]' 00:18:05.045 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:05.045 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:05.045 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:05.045 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:05.045 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:05.045 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:05.045 03:44:57 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:18:05.045 03:44:57 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:18:05.045 03:44:57 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:05.304 03:44:57 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:05.304 03:44:57 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:05.304 03:44:57 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:05.304 03:44:57 
ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:05.304 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:05.304 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bs 00:18:05.304 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:05.304 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:05.304 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:05.304 { 00:18:05.304 "name": "a946cb4f-c003-4435-8d31-4fbecca425ff", 00:18:05.304 "aliases": [ 00:18:05.304 "lvs/nvme0n1p0" 00:18:05.304 ], 00:18:05.304 "product_name": "Logical Volume", 00:18:05.304 "block_size": 4096, 00:18:05.304 "num_blocks": 26476544, 00:18:05.304 "uuid": "a946cb4f-c003-4435-8d31-4fbecca425ff", 00:18:05.304 "assigned_rate_limits": { 00:18:05.304 "rw_ios_per_sec": 0, 00:18:05.304 "rw_mbytes_per_sec": 0, 00:18:05.304 "r_mbytes_per_sec": 0, 00:18:05.304 "w_mbytes_per_sec": 0 00:18:05.304 }, 00:18:05.304 "claimed": false, 00:18:05.304 "zoned": false, 00:18:05.304 "supported_io_types": { 00:18:05.304 "read": true, 00:18:05.304 "write": true, 00:18:05.304 "unmap": true, 00:18:05.304 "flush": false, 00:18:05.304 "reset": true, 00:18:05.304 "nvme_admin": false, 00:18:05.304 "nvme_io": false, 00:18:05.304 "nvme_io_md": false, 00:18:05.304 "write_zeroes": true, 00:18:05.304 "zcopy": false, 00:18:05.304 "get_zone_info": false, 00:18:05.304 "zone_management": false, 00:18:05.304 "zone_append": false, 00:18:05.304 "compare": false, 00:18:05.304 "compare_and_write": false, 00:18:05.304 "abort": false, 00:18:05.304 "seek_hole": true, 00:18:05.304 "seek_data": true, 00:18:05.304 "copy": false, 00:18:05.304 "nvme_iov_md": false 00:18:05.304 }, 00:18:05.304 "driver_specific": { 00:18:05.304 "lvol": { 00:18:05.304 "lvol_store_uuid": "a1dfdd74-0b64-4beb-8888-2f9dc3972e0b", 00:18:05.304 "base_bdev": "nvme0n1", 00:18:05.304 "thin_provision": true, 00:18:05.304 "num_allocated_clusters": 0, 00:18:05.304 "snapshot": false, 00:18:05.304 "clone": false, 00:18:05.304 "esnap_clone": false 00:18:05.304 } 00:18:05.304 } 00:18:05.304 } 00:18:05.304 ]' 00:18:05.304 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:05.304 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:05.304 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:05.563 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:05.563 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:05.563 03:44:57 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:05.563 03:44:57 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:18:05.563 03:44:57 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:05.563 03:44:58 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:18:05.563 03:44:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:05.563 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1378 -- # local bdev_name=a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:05.563 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1379 -- # local bdev_info 00:18:05.563 03:44:58 ftl.ftl_restore -- 
common/autotest_common.sh@1380 -- # local bs 00:18:05.563 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local nb 00:18:05.563 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a946cb4f-c003-4435-8d31-4fbecca425ff 00:18:05.832 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:18:05.832 { 00:18:05.832 "name": "a946cb4f-c003-4435-8d31-4fbecca425ff", 00:18:05.832 "aliases": [ 00:18:05.832 "lvs/nvme0n1p0" 00:18:05.832 ], 00:18:05.832 "product_name": "Logical Volume", 00:18:05.832 "block_size": 4096, 00:18:05.832 "num_blocks": 26476544, 00:18:05.832 "uuid": "a946cb4f-c003-4435-8d31-4fbecca425ff", 00:18:05.832 "assigned_rate_limits": { 00:18:05.832 "rw_ios_per_sec": 0, 00:18:05.832 "rw_mbytes_per_sec": 0, 00:18:05.832 "r_mbytes_per_sec": 0, 00:18:05.832 "w_mbytes_per_sec": 0 00:18:05.832 }, 00:18:05.832 "claimed": false, 00:18:05.832 "zoned": false, 00:18:05.832 "supported_io_types": { 00:18:05.832 "read": true, 00:18:05.832 "write": true, 00:18:05.832 "unmap": true, 00:18:05.832 "flush": false, 00:18:05.832 "reset": true, 00:18:05.832 "nvme_admin": false, 00:18:05.832 "nvme_io": false, 00:18:05.832 "nvme_io_md": false, 00:18:05.832 "write_zeroes": true, 00:18:05.832 "zcopy": false, 00:18:05.832 "get_zone_info": false, 00:18:05.832 "zone_management": false, 00:18:05.832 "zone_append": false, 00:18:05.832 "compare": false, 00:18:05.832 "compare_and_write": false, 00:18:05.832 "abort": false, 00:18:05.832 "seek_hole": true, 00:18:05.832 "seek_data": true, 00:18:05.832 "copy": false, 00:18:05.832 "nvme_iov_md": false 00:18:05.832 }, 00:18:05.832 "driver_specific": { 00:18:05.832 "lvol": { 00:18:05.832 "lvol_store_uuid": "a1dfdd74-0b64-4beb-8888-2f9dc3972e0b", 00:18:05.832 "base_bdev": "nvme0n1", 00:18:05.832 "thin_provision": true, 00:18:05.832 "num_allocated_clusters": 0, 00:18:05.832 "snapshot": false, 00:18:05.832 "clone": false, 00:18:05.832 "esnap_clone": false 00:18:05.832 } 00:18:05.832 } 00:18:05.832 } 00:18:05.832 ]' 00:18:05.832 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:18:05.832 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # bs=4096 00:18:05.832 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:18:05.832 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # nb=26476544 00:18:05.832 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:18:05.832 03:44:58 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # echo 103424 00:18:05.832 03:44:58 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:18:05.832 03:44:58 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d a946cb4f-c003-4435-8d31-4fbecca425ff --l2p_dram_limit 10' 00:18:05.832 03:44:58 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:18:05.832 03:44:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:18:05.832 03:44:58 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:18:05.832 03:44:58 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:18:05.832 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:18:05.832 03:44:58 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a946cb4f-c003-4435-8d31-4fbecca425ff --l2p_dram_limit 10 -c nvc0n1p0 00:18:06.112 
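Note the harmless test(1) error just above: restore.sh line 54 ends up running '[' '' -eq 1 ']' because an optional flag was never set on this command line, and -eq requires integers on both sides, so test(1) complains and simply returns false; the script continues on the else path. The usual guard defaults the variable before the numeric comparison (opt_flag is a stand-in name for whatever variable restore.sh tests there):

# Numeric test against a possibly-empty variable: default it first.
opt_flag=''                            # not passed on this run's command line
if [ "${opt_flag:-0}" -eq 1 ]; then    # empty -> 0, so test(1) stays quiet
  echo 'flag set'
else
  echo 'flag unset'                    # the branch this run takes
fi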
[2024-10-01 03:44:58.496126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.112 [2024-10-01 03:44:58.496193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:06.112 [2024-10-01 03:44:58.496208] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:06.112 [2024-10-01 03:44:58.496215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.112 [2024-10-01 03:44:58.496269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.112 [2024-10-01 03:44:58.496277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:06.112 [2024-10-01 03:44:58.496286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:18:06.112 [2024-10-01 03:44:58.496308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.112 [2024-10-01 03:44:58.496332] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:06.112 [2024-10-01 03:44:58.496981] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:06.112 [2024-10-01 03:44:58.497167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.112 [2024-10-01 03:44:58.497334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:06.112 [2024-10-01 03:44:58.497391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.841 ms 00:18:06.112 [2024-10-01 03:44:58.497431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.112 [2024-10-01 03:44:58.497534] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 3bc97d54-35af-4bbe-b537-d54a52a02875 00:18:06.112 [2024-10-01 03:44:58.498995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.112 [2024-10-01 03:44:58.499105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:06.112 [2024-10-01 03:44:58.499155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:18:06.112 [2024-10-01 03:44:58.499166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.112 [2024-10-01 03:44:58.506241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.112 [2024-10-01 03:44:58.506382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:06.112 [2024-10-01 03:44:58.506484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.024 ms 00:18:06.112 [2024-10-01 03:44:58.506584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.112 [2024-10-01 03:44:58.506700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.112 [2024-10-01 03:44:58.506850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:06.112 [2024-10-01 03:44:58.506901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:18:06.112 [2024-10-01 03:44:58.506955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.112 [2024-10-01 03:44:58.507074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.112 [2024-10-01 03:44:58.507131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:06.112 [2024-10-01 03:44:58.507245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:06.112 [2024-10-01 03:44:58.507298] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.112 [2024-10-01 03:44:58.507395] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:06.112 [2024-10-01 03:44:58.510722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.112 [2024-10-01 03:44:58.510865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:06.112 [2024-10-01 03:44:58.510961] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.334 ms 00:18:06.112 [2024-10-01 03:44:58.511033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.112 [2024-10-01 03:44:58.511128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.112 [2024-10-01 03:44:58.511210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:06.112 [2024-10-01 03:44:58.511264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:06.112 [2024-10-01 03:44:58.511378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.112 [2024-10-01 03:44:58.511450] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:06.112 [2024-10-01 03:44:58.511718] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:06.112 [2024-10-01 03:44:58.511821] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:06.112 [2024-10-01 03:44:58.511906] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:06.112 [2024-10-01 03:44:58.512028] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:06.112 [2024-10-01 03:44:58.512113] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:06.112 [2024-10-01 03:44:58.512219] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:06.112 [2024-10-01 03:44:58.512297] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:06.112 [2024-10-01 03:44:58.512376] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:06.112 [2024-10-01 03:44:58.512419] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:06.112 [2024-10-01 03:44:58.512496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-10-01 03:44:58.512546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:06.113 [2024-10-01 03:44:58.512594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.048 ms 00:18:06.113 [2024-10-01 03:44:58.512642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-10-01 03:44:58.512752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-10-01 03:44:58.512804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:06.113 [2024-10-01 03:44:58.512851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:18:06.113 [2024-10-01 03:44:58.512935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-10-01 03:44:58.513119] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:06.113 [2024-10-01 03:44:58.513207] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:18:06.113 [2024-10-01 03:44:58.513259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.113 [2024-10-01 03:44:58.513344] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-10-01 03:44:58.513394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:06.113 [2024-10-01 03:44:58.513482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:06.113 [2024-10-01 03:44:58.513531] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:06.113 [2024-10-01 03:44:58.513607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:06.113 [2024-10-01 03:44:58.513653] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:06.113 [2024-10-01 03:44:58.513694] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.113 [2024-10-01 03:44:58.513748] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:06.113 [2024-10-01 03:44:58.513835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:06.113 [2024-10-01 03:44:58.513881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:06.113 [2024-10-01 03:44:58.513919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:06.113 [2024-10-01 03:44:58.514023] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:06.113 [2024-10-01 03:44:58.514079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-10-01 03:44:58.514122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:06.113 [2024-10-01 03:44:58.514222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:06.113 [2024-10-01 03:44:58.514266] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-10-01 03:44:58.514301] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:06.113 [2024-10-01 03:44:58.514406] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:06.113 [2024-10-01 03:44:58.514444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.113 [2024-10-01 03:44:58.514489] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:06.113 [2024-10-01 03:44:58.514523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:06.113 [2024-10-01 03:44:58.514556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.113 [2024-10-01 03:44:58.514583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:06.113 [2024-10-01 03:44:58.514614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:06.113 [2024-10-01 03:44:58.514710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.113 [2024-10-01 03:44:58.514751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:06.113 [2024-10-01 03:44:58.514789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:06.113 [2024-10-01 03:44:58.514824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:06.113 [2024-10-01 03:44:58.514848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:06.113 [2024-10-01 03:44:58.514882] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:06.113 [2024-10-01 03:44:58.514913] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.113 [2024-10-01 03:44:58.514997] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:06.113 [2024-10-01 03:44:58.515061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:06.113 [2024-10-01 03:44:58.515094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:06.113 [2024-10-01 03:44:58.515121] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:06.113 [2024-10-01 03:44:58.515152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:06.113 [2024-10-01 03:44:58.515180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-10-01 03:44:58.515227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:06.113 [2024-10-01 03:44:58.515261] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:06.113 [2024-10-01 03:44:58.515293] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-10-01 03:44:58.515333] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:06.113 [2024-10-01 03:44:58.515371] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:06.113 [2024-10-01 03:44:58.515457] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:06.113 [2024-10-01 03:44:58.515491] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:06.113 [2024-10-01 03:44:58.515519] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:06.113 [2024-10-01 03:44:58.515554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:06.113 [2024-10-01 03:44:58.515581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:06.113 [2024-10-01 03:44:58.515673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:06.113 [2024-10-01 03:44:58.515713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:06.113 [2024-10-01 03:44:58.515745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:06.113 [2024-10-01 03:44:58.515778] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:06.113 [2024-10-01 03:44:58.515813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.113 [2024-10-01 03:44:58.515847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:06.113 [2024-10-01 03:44:58.515937] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:06.113 [2024-10-01 03:44:58.515978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:06.113 [2024-10-01 03:44:58.516026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:06.113 [2024-10-01 03:44:58.516062] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:06.113 [2024-10-01 03:44:58.516093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:18:06.113 [2024-10-01 03:44:58.516125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:06.113 [2024-10-01 03:44:58.516211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:06.113 [2024-10-01 03:44:58.516250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:06.113 [2024-10-01 03:44:58.516285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:06.113 [2024-10-01 03:44:58.516310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:06.113 [2024-10-01 03:44:58.516340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:06.113 [2024-10-01 03:44:58.516431] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:06.113 [2024-10-01 03:44:58.516478] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:06.113 [2024-10-01 03:44:58.516512] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:06.113 [2024-10-01 03:44:58.516550] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:06.113 [2024-10-01 03:44:58.516578] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:06.113 [2024-10-01 03:44:58.516611] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:06.113 [2024-10-01 03:44:58.516644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:06.113 [2024-10-01 03:44:58.516727] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:06.113 [2024-10-01 03:44:58.516772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:06.113 [2024-10-01 03:44:58.516803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:06.113 [2024-10-01 03:44:58.516837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.712 ms 00:18:06.113 [2024-10-01 03:44:58.516880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:06.113 [2024-10-01 03:44:58.517016] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
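The layout dump above is internally consistent and worth cross-checking: 20971520 L2P entries at 4 bytes apiece is exactly the 80.00 MiB reported for the l2p region, and the base-device data region of 0x1900000 blocks is exactly the 102400.00 MiB reported for data_btm. A quick arithmetic check, assuming the 4096-byte block size reported by bdev_get_bdevs earlier:

# Sanity-check the FTL layout numbers above (pure shell arithmetic).
echo $(( 20971520 * 4 / 1024 / 1024 ))       # 80     -> l2p region size, MiB
echo $(( 0x1900000 * 4096 / 1024 / 1024 ))   # 102400 -> data_btm region size, MiB
echo $(( 20971520 * 4096 / 1024 / 1024 ))    # 81920  -> user-addressable space, MiB

The gap between the 102400 MiB data region and the 81920 MiB of mapped user space is, presumably, the headroom FTL reserves for garbage collection and over-provisioning.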
00:18:06.113 [2024-10-01 03:44:58.517080] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:08.655 [2024-10-01 03:45:00.705469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.705546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:08.655 [2024-10-01 03:45:00.705563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2188.441 ms 00:18:08.655 [2024-10-01 03:45:00.705574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.734093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.734348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:08.655 [2024-10-01 03:45:00.734369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.295 ms 00:18:08.655 [2024-10-01 03:45:00.734380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.734571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.734587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:08.655 [2024-10-01 03:45:00.734596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:18:08.655 [2024-10-01 03:45:00.734612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.774827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.774884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:08.655 [2024-10-01 03:45:00.774903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.176 ms 00:18:08.655 [2024-10-01 03:45:00.774913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.774969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.774980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:08.655 [2024-10-01 03:45:00.774989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:08.655 [2024-10-01 03:45:00.775025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.775491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.775579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:08.655 [2024-10-01 03:45:00.775592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:18:08.655 [2024-10-01 03:45:00.775605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.775733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.775746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:08.655 [2024-10-01 03:45:00.775754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.105 ms 00:18:08.655 [2024-10-01 03:45:00.775767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.790779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.790996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:08.655 [2024-10-01 
03:45:00.791033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.992 ms 00:18:08.655 [2024-10-01 03:45:00.791043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.803395] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:08.655 [2024-10-01 03:45:00.806632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.806662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:08.655 [2024-10-01 03:45:00.806679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.493 ms 00:18:08.655 [2024-10-01 03:45:00.806687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.870030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.870093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:08.655 [2024-10-01 03:45:00.870115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.307 ms 00:18:08.655 [2024-10-01 03:45:00.870124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.870327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.870339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:08.655 [2024-10-01 03:45:00.870353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:18:08.655 [2024-10-01 03:45:00.870361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.893845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.894082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:08.655 [2024-10-01 03:45:00.894109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.434 ms 00:18:08.655 [2024-10-01 03:45:00.894118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.916762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.916806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:08.655 [2024-10-01 03:45:00.916822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.373 ms 00:18:08.655 [2024-10-01 03:45:00.916830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.917451] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.917473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:08.655 [2024-10-01 03:45:00.917485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.583 ms 00:18:08.655 [2024-10-01 03:45:00.917493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:00.985756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:00.985802] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:08.655 [2024-10-01 03:45:00.985821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.226 ms 00:18:08.655 [2024-10-01 03:45:00.985832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 
03:45:01.010661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:01.010706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:08.655 [2024-10-01 03:45:01.010720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.766 ms 00:18:08.655 [2024-10-01 03:45:01.010729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:01.034265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:01.034309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:08.655 [2024-10-01 03:45:01.034323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.506 ms 00:18:08.655 [2024-10-01 03:45:01.034331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:01.057857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:01.057904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:08.655 [2024-10-01 03:45:01.057919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.496 ms 00:18:08.655 [2024-10-01 03:45:01.057927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:01.057957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:01.057966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:08.655 [2024-10-01 03:45:01.057982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:08.655 [2024-10-01 03:45:01.057990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:01.058087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:08.655 [2024-10-01 03:45:01.058098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:08.655 [2024-10-01 03:45:01.058109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:18:08.655 [2024-10-01 03:45:01.058117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:08.655 [2024-10-01 03:45:01.059240] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2562.643 ms, result 0 00:18:08.655 { 00:18:08.655 "name": "ftl0", 00:18:08.655 "uuid": "3bc97d54-35af-4bbe-b537-d54a52a02875" 00:18:08.655 } 00:18:08.655 03:45:01 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:18:08.655 03:45:01 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:08.915 03:45:01 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:18:08.915 03:45:01 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:09.175 [2024-10-01 03:45:01.478645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.175 [2024-10-01 03:45:01.478713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:09.175 [2024-10-01 03:45:01.478726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:09.175 [2024-10-01 03:45:01.478736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.175 [2024-10-01 03:45:01.478761] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
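The restore.sh@61-65 steps above assemble the configuration the test replays later: the bdev subsystem dump from save_subsystem_config is bracketed with '{"subsystems": [' and ']}' so it becomes a complete loadable JSON config, and ftl0 is then unloaded cleanly so the restore path can recreate it from that file. A sketch of the pattern, using the RPC calls shown in the log (the redirect target is an assumption, inferred from the config/ftl.json path this test suite removes during cleanup):

# Capture the live bdev configuration as a loadable JSON config, then
# unload ftl0 so it can be restored from that config later.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
cfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json   # assumed target
{
  echo '{"subsystems": ['
  "$rpc_py" save_subsystem_config -n bdev   # bdev subsystem only (-n bdev)
  echo ']}'
} > "$cfg"
"$rpc_py" bdev_ftl_unload -b ftl0           # persists FTL metadata on shutdown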
00:18:09.175 [2024-10-01 03:45:01.481548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.175 [2024-10-01 03:45:01.481581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:09.175 [2024-10-01 03:45:01.481603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.768 ms 00:18:09.175 [2024-10-01 03:45:01.481612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.175 [2024-10-01 03:45:01.481896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.175 [2024-10-01 03:45:01.481907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:09.175 [2024-10-01 03:45:01.481918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.255 ms 00:18:09.175 [2024-10-01 03:45:01.481927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.175 [2024-10-01 03:45:01.485179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.175 [2024-10-01 03:45:01.485201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:09.175 [2024-10-01 03:45:01.485213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.235 ms 00:18:09.175 [2024-10-01 03:45:01.485223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.175 [2024-10-01 03:45:01.491396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.175 [2024-10-01 03:45:01.491438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:09.175 [2024-10-01 03:45:01.491450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.153 ms 00:18:09.175 [2024-10-01 03:45:01.491459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.175 [2024-10-01 03:45:01.515693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.175 [2024-10-01 03:45:01.515726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:09.175 [2024-10-01 03:45:01.515739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.175 ms 00:18:09.175 [2024-10-01 03:45:01.515747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.175 [2024-10-01 03:45:01.531226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.175 [2024-10-01 03:45:01.531426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:09.175 [2024-10-01 03:45:01.531447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.437 ms 00:18:09.175 [2024-10-01 03:45:01.531456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.175 [2024-10-01 03:45:01.531608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.175 [2024-10-01 03:45:01.531622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:09.175 [2024-10-01 03:45:01.531633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:18:09.175 [2024-10-01 03:45:01.531641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:09.175 [2024-10-01 03:45:01.554536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:09.175 [2024-10-01 03:45:01.554677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:09.175 [2024-10-01 03:45:01.554696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.874 ms 00:18:09.175 [2024-10-01 03:45:01.554704] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.175 [2024-10-01 03:45:01.577281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.175 [2024-10-01 03:45:01.577402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:09.175 [2024-10-01 03:45:01.577420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.543 ms
00:18:09.175 [2024-10-01 03:45:01.577428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.175 [2024-10-01 03:45:01.600109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.175 [2024-10-01 03:45:01.600229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:09.175 [2024-10-01 03:45:01.600247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.643 ms
00:18:09.175 [2024-10-01 03:45:01.600254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.175 [2024-10-01 03:45:01.622195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.175 [2024-10-01 03:45:01.622227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:09.175 [2024-10-01 03:45:01.622239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.865 ms
00:18:09.175 [2024-10-01 03:45:01.622247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.175 [2024-10-01 03:45:01.622284] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:09.175 [2024-10-01 03:45:01.622299 - 03:45:01.623411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free (identical entries for all 100 bands)
00:18:09.176 [2024-10-01 03:45:01.623426] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:18:09.176 [2024-10-01 03:45:01.623440] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3bc97d54-35af-4bbe-b537-d54a52a02875
00:18:09.176 [2024-10-01 03:45:01.623447] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:18:09.176 [2024-10-01 03:45:01.623459] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:18:09.176 [2024-10-01 03:45:01.623466] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:18:09.176 [2024-10-01 03:45:01.623476] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:18:09.176 [2024-10-01 03:45:01.623482] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:18:09.176 [2024-10-01 03:45:01.623491] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:18:09.176 [2024-10-01 03:45:01.623500] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:18:09.176 [2024-10-01 03:45:01.623509] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:18:09.176 [2024-10-01 03:45:01.623516] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:18:09.176 [2024-10-01 03:45:01.623525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.176 [2024-10-01 03:45:01.623533] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:18:09.176 [2024-10-01 03:45:01.623542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms
00:18:09.176 [2024-10-01 03:45:01.623549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
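The statistics block just dumped is internally consistent: WAF (write amplification factor) is conventionally media writes divided by host writes, and with user writes: 0 against total writes: 960 (presumably all FTL metadata persisted during this shutdown) the ratio degenerates to infinity, which is the WAF: inf printed above. A one-line sketch of that arithmetic (variable names are ours, not SPDK's):

    # WAF = media (total) writes / user writes; 960 / 0 is reported as "inf"
    total_writes, user_writes = 960, 0
    waf = total_writes / user_writes if user_writes else float("inf")
    print(waf)  # inf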
00:18:09.176 [2024-10-01 03:45:01.636707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.176 [2024-10-01 03:45:01.636741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:18:09.176 [2024-10-01 03:45:01.636755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.107 ms
00:18:09.176 [2024-10-01 03:45:01.636763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.176 [2024-10-01 03:45:01.637168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:09.176 [2024-10-01 03:45:01.637179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:18:09.176 [2024-10-01 03:45:01.637189] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.360 ms
00:18:09.176 [2024-10-01 03:45:01.637197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.176 [2024-10-01 03:45:01.675808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.176 [2024-10-01 03:45:01.675853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:09.176 [2024-10-01 03:45:01.675867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.176 [2024-10-01 03:45:01.675878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.176 [2024-10-01 03:45:01.675948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.176 [2024-10-01 03:45:01.675957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:09.176 [2024-10-01 03:45:01.675966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.176 [2024-10-01 03:45:01.675974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.176 [2024-10-01 03:45:01.676071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.176 [2024-10-01 03:45:01.676082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:09.176 [2024-10-01 03:45:01.676092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.176 [2024-10-01 03:45:01.676099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.176 [2024-10-01 03:45:01.676125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.176 [2024-10-01 03:45:01.676133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:09.176 [2024-10-01 03:45:01.676143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.176 [2024-10-01 03:45:01.676150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.435 [2024-10-01 03:45:01.755552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.435 [2024-10-01 03:45:01.755615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:09.435 [2024-10-01 03:45:01.755629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.435 [2024-10-01 03:45:01.755637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.435 [2024-10-01 03:45:01.820553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.435 [2024-10-01 03:45:01.820614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:09.435 [2024-10-01 03:45:01.820629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.435 [2024-10-01 03:45:01.820637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.435 [2024-10-01 03:45:01.820747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.435 [2024-10-01 03:45:01.820757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:09.435 [2024-10-01 03:45:01.820767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.435 [2024-10-01 03:45:01.820774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.435 [2024-10-01 03:45:01.820827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.435 [2024-10-01 03:45:01.820839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:09.435 [2024-10-01 03:45:01.820849] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.435 [2024-10-01 03:45:01.820857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.435 [2024-10-01 03:45:01.820951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.435 [2024-10-01 03:45:01.820960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:09.435 [2024-10-01 03:45:01.820969] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.436 [2024-10-01 03:45:01.820977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.436 [2024-10-01 03:45:01.821035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.436 [2024-10-01 03:45:01.821046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:09.436 [2024-10-01 03:45:01.821058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.436 [2024-10-01 03:45:01.821066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.436 [2024-10-01 03:45:01.821109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.436 [2024-10-01 03:45:01.821119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:09.436 [2024-10-01 03:45:01.821128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.436 [2024-10-01 03:45:01.821136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.436 [2024-10-01 03:45:01.821188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:09.436 [2024-10-01 03:45:01.821201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:09.436 [2024-10-01 03:45:01.821212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:09.436 [2024-10-01 03:45:01.821219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:09.436 [2024-10-01 03:45:01.821358] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 342.680 ms, result 0
00:18:09.436 true
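Each management step in the 'FTL shutdown' sequence above is traced as a four-line record: an Action (or Rollback) marker, a name, a duration, and a status, with the closing finish_msg reporting the overall figure (342.680 ms here). A minimal sketch that folds such records back into a per-step summary; it assumes this console output has been saved to a file, and the path ftl.log is hypothetical:

    import re

    # Matches the payload of a trace_step line, e.g. "... [FTL][ftl0] name: Persist superblock"
    FIELD = re.compile(r"trace_step: \*NOTICE\*: \[FTL\]\[\w+\] (.+)")

    def summarize(path="ftl.log"):  # hypothetical saved copy of this log
        steps, cur = [], None
        for line in open(path):
            m = FIELD.search(line)
            if not m:
                continue
            text = m.group(1).strip()
            if text in ("Action", "Rollback"):
                cur = {"kind": text}              # a new four-line record begins
            elif cur and text.startswith("name: "):
                cur["name"] = text[6:]
            elif cur and text.startswith("duration: "):
                cur["ms"] = float(text.split()[1])
            elif cur and text.startswith("status: "):
                cur["status"] = int(text.split()[1])
                steps.append(cur)                 # "status" closes the record
                cur = None
        for s in sorted(steps, key=lambda s: -s.get("ms", 0.0)):
            print(f"{s['kind']:8} {s.get('name', '?'):32} {s.get('ms', 0.0):8.3f} ms  status {s.get('status')}")

Sorting by duration makes the expensive steps stand out immediately: in the shutdown above, Persist superblock (22.643 ms), Persist trim metadata (22.543 ms) and Set FTL clean state (21.865 ms) dominate, while all twelve Rollback steps report 0.000 ms.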
00:18:09.436 03:45:01 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 74743
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74743 ']'
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74743
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@955 -- # uname
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74743
00:18:09.436 killing process with pid 74743
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74743'
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@969 -- # kill 74743
00:18:09.436 03:45:01 ftl.ftl_restore -- common/autotest_common.sh@974 -- # wait 74743
00:18:15.993 03:45:08 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:18:20.178 262144+0 records in
00:18:20.178 262144+0 records out
00:18:20.178 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.80293 s, 282 MB/s
00:18:20.178 03:45:11 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:18:21.554 03:45:14 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:18:21.812 [2024-10-01 03:45:14.150366] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization...
00:18:21.812 [2024-10-01 03:45:14.150466] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74963 ]
00:18:22.070 [2024-10-01 03:45:14.293572] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:22.329 [2024-10-01 03:45:14.508046] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0
00:18:22.329 [2024-10-01 03:45:14.780102] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:22.329 [2024-10-01 03:45:14.780179] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:18:22.588 [2024-10-01 03:45:14.934705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.588 [2024-10-01 03:45:14.934766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:18:22.588 [2024-10-01 03:45:14.934781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:18:22.588 [2024-10-01 03:45:14.934794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.588 [2024-10-01 03:45:14.934843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.588 [2024-10-01 03:45:14.934854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:22.588 [2024-10-01 03:45:14.934863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:18:22.588 [2024-10-01 03:45:14.934871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.588 [2024-10-01 03:45:14.934890] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:18:22.588 [2024-10-01 03:45:14.935551] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:18:22.588 [2024-10-01 03:45:14.935756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.588 [2024-10-01 03:45:14.935768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:22.588 [2024-10-01 03:45:14.935778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.869 ms
00:18:22.588 [2024-10-01 03:45:14.935786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.588 [2024-10-01 03:45:14.937448] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
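The dd transfer a few entries above is internally consistent: count=256K blocks of bs=4K is exactly 1 GiB, and dividing by the reported elapsed time reproduces dd's decimal-megabyte rate. A quick check of the arithmetic:

    records = 256 * 1024               # count=256K
    block_size = 4 * 1024              # bs=4K
    total_bytes = records * block_size
    print(total_bytes)                           # 1073741824 (1.0 GiB)
    print(round(total_bytes / 3.80293 / 1e6))    # 282, i.e. dd's "282 MB/s"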
00:18:22.588 [2024-10-01 03:45:14.950157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.588 [2024-10-01 03:45:14.950340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:18:22.588 [2024-10-01 03:45:14.950360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.711 ms
00:18:22.588 [2024-10-01 03:45:14.950370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.588 [2024-10-01 03:45:14.950694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.588 [2024-10-01 03:45:14.950722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:18:22.588 [2024-10-01 03:45:14.950734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:18:22.588 [2024-10-01 03:45:14.950742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.588 [2024-10-01 03:45:14.957441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.588 [2024-10-01 03:45:14.957639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:18:22.588 [2024-10-01 03:45:14.957657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.619 ms
00:18:22.588 [2024-10-01 03:45:14.957666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.588 [2024-10-01 03:45:14.957747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.588 [2024-10-01 03:45:14.957757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:18:22.588 [2024-10-01 03:45:14.957766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms
00:18:22.588 [2024-10-01 03:45:14.957774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.588 [2024-10-01 03:45:14.957829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.588 [2024-10-01 03:45:14.957840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:18:22.588 [2024-10-01 03:45:14.957848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms
00:18:22.588 [2024-10-01 03:45:14.957856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.588 [2024-10-01 03:45:14.957883] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:18:22.589 [2024-10-01 03:45:14.961482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.589 [2024-10-01 03:45:14.961510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:18:22.589 [2024-10-01 03:45:14.961520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.608 ms
00:18:22.589 [2024-10-01 03:45:14.961528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.589 [2024-10-01 03:45:14.961559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.589 [2024-10-01 03:45:14.961568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:18:22.589 [2024-10-01 03:45:14.961577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:18:22.589 [2024-10-01 03:45:14.961585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.589 [2024-10-01 03:45:14.961617] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:18:22.589 [2024-10-01 03:45:14.961639] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:18:22.589 [2024-10-01 03:45:14.961676] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:18:22.589 [2024-10-01 03:45:14.961692] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:18:22.589 [2024-10-01 03:45:14.961800] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:18:22.589 [2024-10-01 03:45:14.961812] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:18:22.589 [2024-10-01 03:45:14.961822] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:18:22.589 [2024-10-01 03:45:14.961836] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:18:22.589 [2024-10-01 03:45:14.961847] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:18:22.589 [2024-10-01 03:45:14.961855] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520
00:18:22.589 [2024-10-01 03:45:14.961863] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:18:22.589 [2024-10-01 03:45:14.961871] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:18:22.589 [2024-10-01 03:45:14.961879] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:18:22.589 [2024-10-01 03:45:14.961886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.589 [2024-10-01 03:45:14.961895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:18:22.589 [2024-10-01 03:45:14.961904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.272 ms
00:18:22.589 [2024-10-01 03:45:14.961911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.589 [2024-10-01 03:45:14.961998] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.589 [2024-10-01 03:45:14.962026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:18:22.589 [2024-10-01 03:45:14.962034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms
00:18:22.589 [2024-10-01 03:45:14.962042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.589 [2024-10-01 03:45:14.962158] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:18:22.589 [2024-10-01 03:45:14.962170] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb: offset: 0.00 MiB, blocks: 0.12 MiB
00:18:22.589 [2024-10-01 03:45:14.962196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p: offset: 0.12 MiB, blocks: 80.00 MiB
00:18:22.589 [2024-10-01 03:45:14.962226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md: offset: 80.12 MiB, blocks: 0.50 MiB
00:18:22.589 [2024-10-01 03:45:14.962252] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror: offset: 80.62 MiB, blocks: 0.50 MiB
00:18:22.589 [2024-10-01 03:45:14.962286] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md: offset: 113.88 MiB, blocks: 0.12 MiB
00:18:22.589 [2024-10-01 03:45:14.962309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror: offset: 114.00 MiB, blocks: 0.12 MiB
00:18:22.589 [2024-10-01 03:45:14.962332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0: offset: 81.12 MiB, blocks: 8.00 MiB
00:18:22.589 [2024-10-01 03:45:14.962354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1: offset: 89.12 MiB, blocks: 8.00 MiB
00:18:22.589 [2024-10-01 03:45:14.962375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2: offset: 97.12 MiB, blocks: 8.00 MiB
00:18:22.589 [2024-10-01 03:45:14.962396] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3: offset: 105.12 MiB, blocks: 8.00 MiB
00:18:22.589 [2024-10-01 03:45:14.962417] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md: offset: 113.12 MiB, blocks: 0.25 MiB
00:18:22.589 [2024-10-01 03:45:14.962439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror: offset: 113.38 MiB, blocks: 0.25 MiB
00:18:22.589 [2024-10-01 03:45:14.962459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log: offset: 113.62 MiB, blocks: 0.12 MiB
00:18:22.589 [2024-10-01 03:45:14.962488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror: offset: 113.75 MiB, blocks: 0.12 MiB
00:18:22.589 [2024-10-01 03:45:14.962509] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:18:22.589 [2024-10-01 03:45:14.962517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror: offset: 0.00 MiB, blocks: 0.12 MiB
00:18:22.589 [2024-10-01 03:45:14.962544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap: offset: 102400.25 MiB, blocks: 3.38 MiB
00:18:22.589 [2024-10-01 03:45:14.962567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm: offset: 0.25 MiB, blocks: 102400.00 MiB
00:18:22.589 [2024-10-01 03:45:14.962591] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:18:22.589 [2024-10-01 03:45:14.962600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:22.589 [2024-10-01 03:45:14.962609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000
00:18:22.589 [2024-10-01 03:45:14.962617] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80
00:18:22.589 [2024-10-01 03:45:14.962625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80
00:18:22.589 [2024-10-01 03:45:14.962633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800
00:18:22.589 [2024-10-01 03:45:14.962641] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800
00:18:22.589 [2024-10-01 03:45:14.962648] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800
00:18:22.589 [2024-10-01 03:45:14.962659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800
00:18:22.589 [2024-10-01 03:45:14.962667] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40
00:18:22.589 [2024-10-01 03:45:14.962675] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40
00:18:22.589 [2024-10-01 03:45:14.962683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20
00:18:22.589 [2024-10-01 03:45:14.962690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20
00:18:22.589 [2024-10-01 03:45:14.962697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20
00:18:22.589 [2024-10-01 03:45:14.962704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20
00:18:22.589 [2024-10-01 03:45:14.962713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0
00:18:22.589 [2024-10-01 03:45:14.962720] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:18:22.589 [2024-10-01 03:45:14.962729] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:18:22.590 [2024-10-01 03:45:14.962738] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:18:22.590 [2024-10-01 03:45:14.962745] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:18:22.590 [2024-10-01 03:45:14.962752] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:18:22.590 [2024-10-01 03:45:14.962759] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:18:22.590 [2024-10-01 03:45:14.962767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:14.962774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:18:22.590 [2024-10-01 03:45:14.962782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.679 ms
00:18:22.590 [2024-10-01 03:45:14.962789] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
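The superblock metadata layout just dumped is a contiguous tiling: each nvc region's blk_offs equals the previous region's blk_offs plus its blk_sz, and the trailing free region (type:0xfffffffe) ends at block 0x143300, which at the usual 4-KiB FTL block size is exactly the 5171.00 MiB NV cache capacity reported earlier. A sketch of that check, with the (blk_offs, blk_sz) pairs transcribed from the nvc list above:

    # (blk_offs, blk_sz) pairs from the "SB metadata layout - nvc" dump above
    regions = [
        (0x0, 0x20), (0x20, 0x5000), (0x5020, 0x80), (0x50a0, 0x80),
        (0x5120, 0x800), (0x5920, 0x800), (0x6120, 0x800), (0x6920, 0x800),
        (0x7120, 0x40), (0x7160, 0x40), (0x71a0, 0x20), (0x71c0, 0x20),
        (0x71e0, 0x20), (0x7200, 0x20), (0x7220, 0x13c0e0),
    ]
    for (off, size), (nxt, _) in zip(regions, regions[1:]):
        assert off + size == nxt                 # no gaps, no overlaps
    end = regions[-1][0] + regions[-1][1]
    print(hex(end), end * 4096 // 2**20, "MiB")  # 0x143300 5171 MiB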
00:18:22.590 [2024-10-01 03:45:15.000088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.000348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:18:22.590 [2024-10-01 03:45:15.000374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.248 ms
00:18:22.590 [2024-10-01 03:45:15.000386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.000530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.000543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:18:22.590 [2024-10-01 03:45:15.000554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms
00:18:22.590 [2024-10-01 03:45:15.000563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.033176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.033224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:18:22.590 [2024-10-01 03:45:15.033240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.523 ms
00:18:22.590 [2024-10-01 03:45:15.033248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.033303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.033313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:18:22.590 [2024-10-01 03:45:15.033322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:22.590 [2024-10-01 03:45:15.033330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.033790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.033806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:18:22.590 [2024-10-01 03:45:15.033815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms
00:18:22.590 [2024-10-01 03:45:15.033827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.033976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.033986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:18:22.590 [2024-10-01 03:45:15.033995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms
00:18:22.590 [2024-10-01 03:45:15.034035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.047311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.047505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:18:22.590 [2024-10-01 03:45:15.047521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.255 ms
00:18:22.590 [2024-10-01 03:45:15.047530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.060328] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4
00:18:22.590 [2024-10-01 03:45:15.060370] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:18:22.590 [2024-10-01 03:45:15.060383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.060392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:18:22.590 [2024-10-01 03:45:15.060402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.731 ms
00:18:22.590 [2024-10-01 03:45:15.060410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.084967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.085024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:18:22.590 [2024-10-01 03:45:15.085037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.508 ms
00:18:22.590 [2024-10-01 03:45:15.085046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.097427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.097468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:18:22.590 [2024-10-01 03:45:15.097479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.324 ms
00:18:22.590 [2024-10-01 03:45:15.097487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.108995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.109189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:18:22.590 [2024-10-01 03:45:15.109207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.464 ms
00:18:22.590 [2024-10-01 03:45:15.109216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.590 [2024-10-01 03:45:15.109868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.590 [2024-10-01 03:45:15.109890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:18:22.590 [2024-10-01 03:45:15.109899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms
00:18:22.590 [2024-10-01 03:45:15.109907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.848 [2024-10-01 03:45:15.168860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.848 [2024-10-01 03:45:15.169135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:18:22.848 [2024-10-01 03:45:15.169155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 58.932 ms
00:18:22.848 [2024-10-01 03:45:15.169164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.848 [2024-10-01 03:45:15.180881] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB
00:18:22.848 [2024-10-01 03:45:15.184334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.848 [2024-10-01 03:45:15.184372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:18:22.848 [2024-10-01 03:45:15.184386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.012 ms
00:18:22.848 [2024-10-01 03:45:15.184395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.848 [2024-10-01 03:45:15.184533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.848 [2024-10-01 03:45:15.184545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:18:22.848 [2024-10-01 03:45:15.184555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms
00:18:22.848 [2024-10-01 03:45:15.184564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.848 [2024-10-01 03:45:15.184639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.848 [2024-10-01 03:45:15.184649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:18:22.848 [2024-10-01 03:45:15.184658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms
00:18:22.848 [2024-10-01 03:45:15.184666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.848 [2024-10-01 03:45:15.184687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.848 [2024-10-01 03:45:15.184698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:18:22.848 [2024-10-01 03:45:15.184706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:18:22.848 [2024-10-01 03:45:15.184714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.848 [2024-10-01 03:45:15.184748] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:18:22.848 [2024-10-01 03:45:15.184759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.848 [2024-10-01 03:45:15.184767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:18:22.848 [2024-10-01 03:45:15.184775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:18:22.848 [2024-10-01 03:45:15.184787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.848 [2024-10-01 03:45:15.208570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.848 [2024-10-01 03:45:15.208617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:18:22.848 [2024-10-01 03:45:15.208631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.762 ms
00:18:22.848 [2024-10-01 03:45:15.208639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.848 [2024-10-01 03:45:15.208723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:22.848 [2024-10-01 03:45:15.208734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:18:22.848 [2024-10-01 03:45:15.208743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms
00:18:22.848 [2024-10-01 03:45:15.208751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:22.848 [2024-10-01 03:45:15.209814] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 274.648 ms, result 0
00:18:45.740  Copying: 47/1024 [MB] (47 MBps) Copying: 94/1024 [MB] (46 MBps) Copying: 139/1024 [MB] (45 MBps) Copying: 183/1024 [MB] (43 MBps) Copying: 226/1024 [MB] (43 MBps) Copying: 272/1024 [MB] (45 MBps) Copying: 315/1024 [MB] (43 MBps) Copying: 358/1024 [MB] (42 MBps) Copying: 401/1024 [MB] (42 MBps) Copying: 444/1024 [MB] (43 MBps) Copying: 486/1024 [MB] (42 MBps) Copying: 529/1024 [MB] (43 MBps) Copying: 572/1024 [MB] (42 MBps) Copying: 615/1024 [MB] (43 MBps) Copying: 665/1024 [MB] (49 MBps) Copying: 711/1024 [MB] (46 MBps) Copying: 755/1024 [MB] (43 MBps) Copying: 798/1024 [MB] (43 MBps) Copying: 843/1024 [MB] (45 MBps) Copying: 886/1024 [MB] (42 MBps) Copying: 930/1024 [MB] (44 MBps) Copying: 979/1024 [MB] (48 MBps) Copying: 1022/1024 [MB] (43 MBps) Copying: 1024/1024 [MB] (average 44 MBps)
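The closing average of the copy ticker agrees with the wall clock: 'FTL startup' finished at 03:45:15.210 and the first shutdown trace below fires at 03:45:38.258, so roughly 23 seconds for the 1024 MB written through ftl0. A quick cross-check (seconds within minute 03:45, read off the timestamps above and below):

    t_start = 15.210   # 03:45:15.210, "FTL startup" finished
    t_end = 38.258     # 03:45:38.258, first shutdown trace_step
    print(round(1024 / (t_end - t_start)))  # 44, matching "average 44 MBps"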
00:18:45.740 [2024-10-01 03:45:38.258400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.740 [2024-10-01 03:45:38.258456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:18:45.740 [2024-10-01 03:45:38.258471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms
00:18:45.740 [2024-10-01 03:45:38.258479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.740 [2024-10-01 03:45:38.258514] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:18:45.740 [2024-10-01 03:45:38.261355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.740 [2024-10-01 03:45:38.261389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:18:45.740 [2024-10-01 03:45:38.261400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.826 ms
00:18:45.740 [2024-10-01 03:45:38.261408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.740 [2024-10-01 03:45:38.262888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.740 [2024-10-01 03:45:38.262920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:18:45.740 [2024-10-01 03:45:38.262930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.459 ms
00:18:45.740 [2024-10-01 03:45:38.262937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.740 [2024-10-01 03:45:38.277216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.740 [2024-10-01 03:45:38.277262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:18:45.740 [2024-10-01 03:45:38.277276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.262 ms
00:18:45.740 [2024-10-01 03:45:38.277285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.740 [2024-10-01 03:45:38.283417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.740 [2024-10-01 03:45:38.283445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:18:45.740 [2024-10-01 03:45:38.283456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.099 ms
00:18:45.740 [2024-10-01 03:45:38.283465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.999 [2024-10-01 03:45:38.307607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.999 [2024-10-01 03:45:38.307654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:18:45.999 [2024-10-01 03:45:38.307666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.087 ms
00:18:45.999 [2024-10-01 03:45:38.307673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.999 [2024-10-01 03:45:38.322112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.999 [2024-10-01 03:45:38.322145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:18:45.999 [2024-10-01 03:45:38.322161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.405 ms
00:18:45.999 [2024-10-01 03:45:38.322170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.999 [2024-10-01 03:45:38.322293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.999 [2024-10-01 03:45:38.322304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:18:45.999 [2024-10-01 03:45:38.322313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms
00:18:45.999 [2024-10-01 03:45:38.322321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.999 [2024-10-01 03:45:38.345450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.999 [2024-10-01 03:45:38.345484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:18:45.999 [2024-10-01 03:45:38.345495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.115 ms
00:18:45.999 [2024-10-01 03:45:38.345502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.999 [2024-10-01 03:45:38.367905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.999 [2024-10-01 03:45:38.367934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:18:45.999 [2024-10-01 03:45:38.367944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.371 ms
00:18:45.999 [2024-10-01 03:45:38.367952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.999 [2024-10-01 03:45:38.390389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.999 [2024-10-01 03:45:38.390416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:18:45.999 [2024-10-01 03:45:38.390426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.407 ms
00:18:45.999 [2024-10-01 03:45:38.390434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.999 [2024-10-01 03:45:38.412566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:18:45.999 [2024-10-01 03:45:38.412610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:18:45.999 [2024-10-01 03:45:38.412620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.077 ms
00:18:45.999 [2024-10-01 03:45:38.412627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:45.999 [2024-10-01 03:45:38.412659] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:18:45.999 [2024-10-01 03:45:38.412675 - 03:45:38.413246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-70: 0 / 261120 wr_cnt: 0 state: free (identical entries for Bands 1 through 70)
00:18:46.000 [2024-10-01 03:45:38.413253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 
wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:46.000 [2024-10-01 03:45:38.413494] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:46.000 [2024-10-01 03:45:38.413501] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3bc97d54-35af-4bbe-b537-d54a52a02875 00:18:46.000 [2024-10-01 03:45:38.413509] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:46.000 [2024-10-01 03:45:38.413517] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:46.000 [2024-10-01 03:45:38.413523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:46.000 [2024-10-01 03:45:38.413531] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:46.000 [2024-10-01 03:45:38.413540] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:46.000 [2024-10-01 03:45:38.413548] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:46.000 [2024-10-01 03:45:38.413560] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:46.000 [2024-10-01 03:45:38.413566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:46.000 [2024-10-01 03:45:38.413572] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:46.000 [2024-10-01 03:45:38.413580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.000 [2024-10-01 03:45:38.413587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:46.000 [2024-10-01 03:45:38.413603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:18:46.000 [2024-10-01 03:45:38.413611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.000 [2024-10-01 03:45:38.426485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.000 [2024-10-01 03:45:38.426665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:46.000 [2024-10-01 03:45:38.426681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.858 ms 00:18:46.000 [2024-10-01 03:45:38.426689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.000 [2024-10-01 03:45:38.427074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:46.000 [2024-10-01 03:45:38.427087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:46.000 [2024-10-01 03:45:38.427096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:18:46.000 [2024-10-01 03:45:38.427104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.000 [2024-10-01 03:45:38.456418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.000 [2024-10-01 03:45:38.456457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize reloc 00:18:46.000 [2024-10-01 03:45:38.456468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.000 [2024-10-01 03:45:38.456481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.000 [2024-10-01 03:45:38.456542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.000 [2024-10-01 03:45:38.456551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:46.000 [2024-10-01 03:45:38.456559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.000 [2024-10-01 03:45:38.456566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.000 [2024-10-01 03:45:38.456620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.000 [2024-10-01 03:45:38.456631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:46.000 [2024-10-01 03:45:38.456639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.000 [2024-10-01 03:45:38.456647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.000 [2024-10-01 03:45:38.456667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.000 [2024-10-01 03:45:38.456676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:46.000 [2024-10-01 03:45:38.456684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.000 [2024-10-01 03:45:38.456692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.000 [2024-10-01 03:45:38.538378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.000 [2024-10-01 03:45:38.538433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:46.000 [2024-10-01 03:45:38.538446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.000 [2024-10-01 03:45:38.538454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.258 [2024-10-01 03:45:38.604462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.258 [2024-10-01 03:45:38.604698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:46.258 [2024-10-01 03:45:38.604717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.258 [2024-10-01 03:45:38.604726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.258 [2024-10-01 03:45:38.604813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.258 [2024-10-01 03:45:38.604823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:46.258 [2024-10-01 03:45:38.604831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.258 [2024-10-01 03:45:38.604840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.258 [2024-10-01 03:45:38.604874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.258 [2024-10-01 03:45:38.604887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:46.258 [2024-10-01 03:45:38.604895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.258 [2024-10-01 03:45:38.604903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.258 [2024-10-01 03:45:38.604997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.258 
[2024-10-01 03:45:38.605033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:46.258 [2024-10-01 03:45:38.605042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.258 [2024-10-01 03:45:38.605050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.258 [2024-10-01 03:45:38.605082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.258 [2024-10-01 03:45:38.605091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:46.258 [2024-10-01 03:45:38.605102] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.258 [2024-10-01 03:45:38.605110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.258 [2024-10-01 03:45:38.605150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.258 [2024-10-01 03:45:38.605159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:46.258 [2024-10-01 03:45:38.605167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.258 [2024-10-01 03:45:38.605175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.258 [2024-10-01 03:45:38.605220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:46.258 [2024-10-01 03:45:38.605233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:46.258 [2024-10-01 03:45:38.605241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:46.258 [2024-10-01 03:45:38.605248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:46.259 [2024-10-01 03:45:38.605367] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 346.936 ms, result 0 00:18:48.157 00:18:48.157 00:18:48.157 03:45:40 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:18:48.157 [2024-10-01 03:45:40.362807] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
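A note on the shutdown statistics dumped above: "WAF: inf" is not an error. Write amplification factor is conventionally media writes divided by user writes, and this pass performed 960 media writes (the FTL's own metadata persistence) against 0 user writes, so the ratio degenerates to infinity. Likewise, every line of the bands-validity dump reads "0 / 261120 wr_cnt: 0 state: free": valid blocks out of band capacity, the band's write count, and its state, all still untouched. A minimal sketch of the WAF arithmetic (hypothetical helper names, not SPDK's API):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical helper mirroring the "WAF: inf" line above;
     * the names here are illustrative, not SPDK's actual API. */
    static double waf(uint64_t media_writes, uint64_t user_writes)
    {
        if (user_writes == 0)
            return INFINITY;    /* printed as "inf", as in the log */
        return (double)media_writes / (double)user_writes;
    }

    int main(void)
    {
        printf("WAF: %g\n", waf(960, 0));    /* -> WAF: inf */
        return 0;
    }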
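The spdk_dd line above drives the actual restore check: it reads --count=262144 blocks from the ftl0 bdev into test/ftl/testfile using the ftl.json config. Assuming a 4096-byte FTL logical block (an assumption, but consistent with the copy progress further below), that is exactly 1024 MiB, which is why the progress counter below runs to "1024/1024 [MB]". At the reported average of 46 MBps the copy takes roughly 22 seconds, matching the timestamp jump from 03:45:41 to 03:46:03. A quick check of the arithmetic:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t blocks = 262144;      /* --count from the command line above */
        uint64_t block_size = 4096;    /* assumed FTL logical block size */
        uint64_t bytes = blocks * block_size;
        printf("%llu MiB, ~%.0f s at 46 MBps\n",
               (unsigned long long)(bytes >> 20),
               (double)(bytes >> 20) / 46.0);   /* -> 1024 MiB, ~22 s */
        return 0;
    }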
00:18:48.157 [2024-10-01 03:45:40.363147] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75231 ] 00:18:48.157 [2024-10-01 03:45:40.504952] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.415 [2024-10-01 03:45:40.715483] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.674 [2024-10-01 03:45:40.987908] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:48.674 [2024-10-01 03:45:40.987981] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:18:48.674 [2024-10-01 03:45:41.142986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.674 [2024-10-01 03:45:41.143057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:48.674 [2024-10-01 03:45:41.143073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:18:48.674 [2024-10-01 03:45:41.143087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.674 [2024-10-01 03:45:41.143139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.674 [2024-10-01 03:45:41.143150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:48.674 [2024-10-01 03:45:41.143158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:48.674 [2024-10-01 03:45:41.143165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.674 [2024-10-01 03:45:41.143186] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:48.674 [2024-10-01 03:45:41.143837] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:48.674 [2024-10-01 03:45:41.143858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.674 [2024-10-01 03:45:41.143866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:48.674 [2024-10-01 03:45:41.143876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.678 ms 00:18:48.674 [2024-10-01 03:45:41.143884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.674 [2024-10-01 03:45:41.145263] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:18:48.674 [2024-10-01 03:45:41.158063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.674 [2024-10-01 03:45:41.158235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:18:48.674 [2024-10-01 03:45:41.158254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.801 ms 00:18:48.674 [2024-10-01 03:45:41.158263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.674 [2024-10-01 03:45:41.158317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.674 [2024-10-01 03:45:41.158327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:18:48.674 [2024-10-01 03:45:41.158335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:18:48.674 [2024-10-01 03:45:41.158343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.674 [2024-10-01 03:45:41.164968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:18:48.674 [2024-10-01 03:45:41.165016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:48.674 [2024-10-01 03:45:41.165027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.573 ms 00:18:48.674 [2024-10-01 03:45:41.165035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.674 [2024-10-01 03:45:41.165119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.674 [2024-10-01 03:45:41.165130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:48.674 [2024-10-01 03:45:41.165139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:18:48.674 [2024-10-01 03:45:41.165147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.674 [2024-10-01 03:45:41.165201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.674 [2024-10-01 03:45:41.165212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:48.674 [2024-10-01 03:45:41.165220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:18:48.674 [2024-10-01 03:45:41.165228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.674 [2024-10-01 03:45:41.165251] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:48.674 [2024-10-01 03:45:41.168954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.674 [2024-10-01 03:45:41.168983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:48.674 [2024-10-01 03:45:41.168993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.710 ms 00:18:48.674 [2024-10-01 03:45:41.169018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.674 [2024-10-01 03:45:41.169049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.674 [2024-10-01 03:45:41.169057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:48.674 [2024-10-01 03:45:41.169066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:48.675 [2024-10-01 03:45:41.169074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.675 [2024-10-01 03:45:41.169099] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:18:48.675 [2024-10-01 03:45:41.169120] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:18:48.675 [2024-10-01 03:45:41.169157] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:18:48.675 [2024-10-01 03:45:41.169173] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:18:48.675 [2024-10-01 03:45:41.169279] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:48.675 [2024-10-01 03:45:41.169290] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:48.675 [2024-10-01 03:45:41.169301] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:48.675 [2024-10-01 03:45:41.169315] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:48.675 [2024-10-01 03:45:41.169324] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:48.675 [2024-10-01 03:45:41.169333] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:48.675 [2024-10-01 03:45:41.169340] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:48.675 [2024-10-01 03:45:41.169348] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:48.675 [2024-10-01 03:45:41.169356] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:48.675 [2024-10-01 03:45:41.169364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.675 [2024-10-01 03:45:41.169372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:48.675 [2024-10-01 03:45:41.169380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:18:48.675 [2024-10-01 03:45:41.169387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.675 [2024-10-01 03:45:41.169484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.675 [2024-10-01 03:45:41.169596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:48.675 [2024-10-01 03:45:41.169608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:48.675 [2024-10-01 03:45:41.169617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.675 [2024-10-01 03:45:41.169725] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:48.675 [2024-10-01 03:45:41.169736] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:48.675 [2024-10-01 03:45:41.169745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.675 [2024-10-01 03:45:41.169753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169761] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:48.675 [2024-10-01 03:45:41.169768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169775] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:48.675 [2024-10-01 03:45:41.169782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:48.675 [2024-10-01 03:45:41.169790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.675 [2024-10-01 03:45:41.169805] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:48.675 [2024-10-01 03:45:41.169812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:48.675 [2024-10-01 03:45:41.169818] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:48.675 [2024-10-01 03:45:41.169832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:48.675 [2024-10-01 03:45:41.169839] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:48.675 [2024-10-01 03:45:41.169847] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169855] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:48.675 [2024-10-01 03:45:41.169863] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:48.675 [2024-10-01 03:45:41.169869] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169876] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:48.675 [2024-10-01 03:45:41.169883] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.675 [2024-10-01 03:45:41.169898] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:48.675 [2024-10-01 03:45:41.169905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.675 [2024-10-01 03:45:41.169919] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:48.675 [2024-10-01 03:45:41.169926] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.675 [2024-10-01 03:45:41.169938] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:48.675 [2024-10-01 03:45:41.169945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:48.675 [2024-10-01 03:45:41.169959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:48.675 [2024-10-01 03:45:41.169965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:48.675 [2024-10-01 03:45:41.169972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.675 [2024-10-01 03:45:41.169979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:48.675 [2024-10-01 03:45:41.169986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:48.675 [2024-10-01 03:45:41.169992] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:48.675 [2024-10-01 03:45:41.169999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:48.675 [2024-10-01 03:45:41.170018] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:18:48.675 [2024-10-01 03:45:41.170026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.675 [2024-10-01 03:45:41.170033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:48.675 [2024-10-01 03:45:41.170039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:48.675 [2024-10-01 03:45:41.170046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.675 [2024-10-01 03:45:41.170053] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:48.675 [2024-10-01 03:45:41.170061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:48.675 [2024-10-01 03:45:41.170070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:48.675 [2024-10-01 03:45:41.170079] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:48.675 [2024-10-01 03:45:41.170087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:48.675 [2024-10-01 03:45:41.170094] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:48.675 [2024-10-01 03:45:41.170101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:48.675 
[2024-10-01 03:45:41.170108] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:48.675 [2024-10-01 03:45:41.170115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:48.675 [2024-10-01 03:45:41.170122] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:48.675 [2024-10-01 03:45:41.170131] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:48.675 [2024-10-01 03:45:41.170141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.675 [2024-10-01 03:45:41.170149] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:48.675 [2024-10-01 03:45:41.170157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:48.675 [2024-10-01 03:45:41.170166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:48.675 [2024-10-01 03:45:41.170173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:48.675 [2024-10-01 03:45:41.170180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:48.675 [2024-10-01 03:45:41.170188] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:48.675 [2024-10-01 03:45:41.170195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:48.675 [2024-10-01 03:45:41.170202] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:48.675 [2024-10-01 03:45:41.170209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:48.675 [2024-10-01 03:45:41.170217] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:48.675 [2024-10-01 03:45:41.170225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:48.675 [2024-10-01 03:45:41.170232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:48.675 [2024-10-01 03:45:41.170239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:48.675 [2024-10-01 03:45:41.170247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:48.675 [2024-10-01 03:45:41.170254] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:48.675 [2024-10-01 03:45:41.170262] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:48.675 [2024-10-01 03:45:41.170271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:18:48.675 [2024-10-01 03:45:41.170278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:48.675 [2024-10-01 03:45:41.170286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:48.675 [2024-10-01 03:45:41.170293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:48.675 [2024-10-01 03:45:41.170301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.676 [2024-10-01 03:45:41.170308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:48.676 [2024-10-01 03:45:41.170316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.646 ms 00:18:48.676 [2024-10-01 03:45:41.170323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.676 [2024-10-01 03:45:41.210771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.676 [2024-10-01 03:45:41.210818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:48.676 [2024-10-01 03:45:41.210831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.400 ms 00:18:48.676 [2024-10-01 03:45:41.210840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.676 [2024-10-01 03:45:41.210949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.676 [2024-10-01 03:45:41.210959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:48.676 [2024-10-01 03:45:41.210968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:18:48.676 [2024-10-01 03:45:41.210976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.243576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.243617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:48.934 [2024-10-01 03:45:41.243631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.519 ms 00:18:48.934 [2024-10-01 03:45:41.243639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.243683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.243692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:48.934 [2024-10-01 03:45:41.243701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:18:48.934 [2024-10-01 03:45:41.243709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.244205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.244222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:48.934 [2024-10-01 03:45:41.244232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.429 ms 00:18:48.934 [2024-10-01 03:45:41.244244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.244371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.244382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:48.934 [2024-10-01 03:45:41.244390] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:18:48.934 [2024-10-01 03:45:41.244398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.257815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.257844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:48.934 [2024-10-01 03:45:41.257854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.394 ms 00:18:48.934 [2024-10-01 03:45:41.257862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.270714] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:18:48.934 [2024-10-01 03:45:41.270748] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:18:48.934 [2024-10-01 03:45:41.270760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.270769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:18:48.934 [2024-10-01 03:45:41.270778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.799 ms 00:18:48.934 [2024-10-01 03:45:41.270786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.294953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.295016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:18:48.934 [2024-10-01 03:45:41.295029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.123 ms 00:18:48.934 [2024-10-01 03:45:41.295037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.306761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.306796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:18:48.934 [2024-10-01 03:45:41.306807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.673 ms 00:18:48.934 [2024-10-01 03:45:41.306815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.318058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.318088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:18:48.934 [2024-10-01 03:45:41.318098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.199 ms 00:18:48.934 [2024-10-01 03:45:41.318106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.318749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.318770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:48.934 [2024-10-01 03:45:41.318779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:18:48.934 [2024-10-01 03:45:41.318787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.376214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.934 [2024-10-01 03:45:41.376469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:18:48.934 [2024-10-01 03:45:41.376490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 57.408 ms 00:18:48.934 [2024-10-01 03:45:41.376499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.934 [2024-10-01 03:45:41.387384] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:18:48.935 [2024-10-01 03:45:41.390312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.935 [2024-10-01 03:45:41.390342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:48.935 [2024-10-01 03:45:41.390355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.770 ms 00:18:48.935 [2024-10-01 03:45:41.390368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.935 [2024-10-01 03:45:41.390484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.935 [2024-10-01 03:45:41.390495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:18:48.935 [2024-10-01 03:45:41.390515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:48.935 [2024-10-01 03:45:41.390524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.935 [2024-10-01 03:45:41.390596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.935 [2024-10-01 03:45:41.390607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:48.935 [2024-10-01 03:45:41.390616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:48.935 [2024-10-01 03:45:41.390625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.935 [2024-10-01 03:45:41.390649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.935 [2024-10-01 03:45:41.390658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:48.935 [2024-10-01 03:45:41.390667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:48.935 [2024-10-01 03:45:41.390675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.935 [2024-10-01 03:45:41.390709] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:18:48.935 [2024-10-01 03:45:41.390720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.935 [2024-10-01 03:45:41.390729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:18:48.935 [2024-10-01 03:45:41.390741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:18:48.935 [2024-10-01 03:45:41.390748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.935 [2024-10-01 03:45:41.414304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.935 [2024-10-01 03:45:41.414435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:48.935 [2024-10-01 03:45:41.414488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.537 ms 00:18:48.935 [2024-10-01 03:45:41.414519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:48.935 [2024-10-01 03:45:41.414603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:48.935 [2024-10-01 03:45:41.414629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:48.935 [2024-10-01 03:45:41.414649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:18:48.935 [2024-10-01 03:45:41.414668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
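Before the startup finish message below, it is worth tying the layout numbers together: the dump above reported "L2P entries: 20971520" and "L2P address size: 4", and 20971520 entries times 4 bytes is exactly the "80.00 MiB" shown for the l2p region. A minimal check, with the values copied from the log and nothing else assumed:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t entries   = 20971520;   /* "L2P entries" above */
        uint64_t addr_size = 4;          /* "L2P address size" above */
        uint64_t bytes = entries * addr_size;
        /* 20971520 * 4 B = 83886080 B = 80 MiB ("Region l2p ... 80.00 MiB") */
        printf("L2P table: %llu MiB\n", (unsigned long long)(bytes >> 20));
        return 0;
    }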
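Every step in this startup (and the shutdowns around it) is reported as the same four-field group: Action or Rollback, name, duration, status, emitted by trace_step in mngt/ftl_mngt.c. A rough sketch of that timed-step pattern, as a standalone illustration rather than SPDK's actual implementation:

    #define _POSIX_C_SOURCE 199309L
    #include <stdio.h>
    #include <time.h>

    typedef int (*step_fn)(void);

    /* Time a step callback and print the Action/name/duration/status
     * group in the same shape as the log above (illustrative only). */
    static int run_step(const char *name, step_fn fn)
    {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        int status = fn();
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("Action\nname: %s\nduration: %.3f ms\nstatus: %d\n",
               name, ms, status);
        return status;
    }

    static int noop(void) { return 0; }

    int main(void)
    {
        return run_step("Finalize initialization", noop);
    }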
00:18:48.935 [2024-10-01 03:45:41.416023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 272.560 ms, result 0 00:19:11.310  Copying: 45/1024 [MB] (45 MBps) Copying: 91/1024 [MB] (46 MBps) Copying: 138/1024 [MB] (46 MBps) Copying: 185/1024 [MB] (47 MBps) Copying: 232/1024 [MB] (46 MBps) Copying: 278/1024 [MB] (46 MBps) Copying: 324/1024 [MB] (45 MBps) Copying: 373/1024 [MB] (49 MBps) Copying: 418/1024 [MB] (44 MBps) Copying: 464/1024 [MB] (45 MBps) Copying: 511/1024 [MB] (47 MBps) Copying: 559/1024 [MB] (47 MBps) Copying: 605/1024 [MB] (46 MBps) Copying: 653/1024 [MB] (47 MBps) Copying: 699/1024 [MB] (46 MBps) Copying: 744/1024 [MB] (44 MBps) Copying: 790/1024 [MB] (46 MBps) Copying: 837/1024 [MB] (46 MBps) Copying: 884/1024 [MB] (47 MBps) Copying: 930/1024 [MB] (45 MBps) Copying: 978/1024 [MB] (47 MBps) Copying: 1024/1024 [MB] (average 46 MBps)[2024-10-01 03:46:03.679043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.679134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:11.310 [2024-10-01 03:46:03.679158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:11.310 [2024-10-01 03:46:03.679181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.679219] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:11.310 [2024-10-01 03:46:03.685047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.685098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:11.310 [2024-10-01 03:46:03.685116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.804 ms 00:19:11.310 [2024-10-01 03:46:03.685130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.685517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.685543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:11.310 [2024-10-01 03:46:03.685558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.349 ms 00:19:11.310 [2024-10-01 03:46:03.685572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.691481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.691504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:11.310 [2024-10-01 03:46:03.691513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.882 ms 00:19:11.310 [2024-10-01 03:46:03.691521] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.697695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.697736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:11.310 [2024-10-01 03:46:03.697746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.158 ms 00:19:11.310 [2024-10-01 03:46:03.697754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.721888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.721924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:11.310 
[2024-10-01 03:46:03.721936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.073 ms 00:19:11.310 [2024-10-01 03:46:03.721944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.736204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.736240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:11.310 [2024-10-01 03:46:03.736251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.236 ms 00:19:11.310 [2024-10-01 03:46:03.736259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.736387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.736399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:11.310 [2024-10-01 03:46:03.736409] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:19:11.310 [2024-10-01 03:46:03.736417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.759407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.759441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:11.310 [2024-10-01 03:46:03.759452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.976 ms 00:19:11.310 [2024-10-01 03:46:03.759460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.782467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.782505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:11.310 [2024-10-01 03:46:03.782516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.987 ms 00:19:11.310 [2024-10-01 03:46:03.782531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.804695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.804747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:11.310 [2024-10-01 03:46:03.804759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.142 ms 00:19:11.310 [2024-10-01 03:46:03.804767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.826769] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.310 [2024-10-01 03:46:03.826807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:11.310 [2024-10-01 03:46:03.826818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.955 ms 00:19:11.310 [2024-10-01 03:46:03.826826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.310 [2024-10-01 03:46:03.826848] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:11.310 [2024-10-01 03:46:03.826863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:11.310 [2024-10-01 03:46:03.826874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:11.310 [2024-10-01 03:46:03.826882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826890] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.826993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827106] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 
03:46:03.827312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 
00:19:11.311 [2024-10-01 03:46:03.827503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:11.311 [2024-10-01 03:46:03.827609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:11.312 [2024-10-01 03:46:03.827619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:11.312 [2024-10-01 03:46:03.827627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:11.312 [2024-10-01 03:46:03.827634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:11.312 [2024-10-01 03:46:03.827642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:11.312 [2024-10-01 03:46:03.827649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:11.312 [2024-10-01 03:46:03.827657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:11.312 [2024-10-01 03:46:03.827664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:11.312 [2024-10-01 03:46:03.827680] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:11.312 [2024-10-01 03:46:03.827689] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3bc97d54-35af-4bbe-b537-d54a52a02875 00:19:11.312 [2024-10-01 03:46:03.827697] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:11.312 [2024-10-01 
03:46:03.827705] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:11.312 [2024-10-01 03:46:03.827713] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:11.312 [2024-10-01 03:46:03.827721] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:11.312 [2024-10-01 03:46:03.827728] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:11.312 [2024-10-01 03:46:03.827741] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:11.312 [2024-10-01 03:46:03.827749] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:11.312 [2024-10-01 03:46:03.827757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:11.312 [2024-10-01 03:46:03.827763] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:11.312 [2024-10-01 03:46:03.827771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.312 [2024-10-01 03:46:03.827786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:11.312 [2024-10-01 03:46:03.827795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.924 ms 00:19:11.312 [2024-10-01 03:46:03.827803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.312 [2024-10-01 03:46:03.840742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.312 [2024-10-01 03:46:03.840983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:11.312 [2024-10-01 03:46:03.841016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.922 ms 00:19:11.312 [2024-10-01 03:46:03.841033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.312 [2024-10-01 03:46:03.841398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:11.312 [2024-10-01 03:46:03.841408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:11.312 [2024-10-01 03:46:03.841417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:19:11.312 [2024-10-01 03:46:03.841424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:03.871035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:03.871266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:11.571 [2024-10-01 03:46:03.871284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:03.871299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:03.871371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:03.871380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:11.571 [2024-10-01 03:46:03.871389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:03.871397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:03.871470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:03.871482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:11.571 [2024-10-01 03:46:03.871490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:03.871498] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:03.871517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:03.871525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:11.571 [2024-10-01 03:46:03.871533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:03.871540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:03.951594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:03.951643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:11.571 [2024-10-01 03:46:03.951655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:03.951670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:04.017577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:04.017632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:11.571 [2024-10-01 03:46:04.017644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:04.017652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:04.017733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:04.017742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:11.571 [2024-10-01 03:46:04.017751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:04.017760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:04.017800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:04.017809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:11.571 [2024-10-01 03:46:04.017817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:04.017825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:04.017915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:04.017925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:11.571 [2024-10-01 03:46:04.017935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:04.017943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.571 [2024-10-01 03:46:04.017971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.571 [2024-10-01 03:46:04.017984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:11.571 [2024-10-01 03:46:04.017992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.571 [2024-10-01 03:46:04.018019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.572 [2024-10-01 03:46:04.018057] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.572 [2024-10-01 03:46:04.018082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:11.572 [2024-10-01 03:46:04.018091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:11.572 [2024-10-01 03:46:04.018098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.572 [2024-10-01 03:46:04.018145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:11.572 [2024-10-01 03:46:04.018156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:11.572 [2024-10-01 03:46:04.018165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:11.572 [2024-10-01 03:46:04.018172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:11.572 [2024-10-01 03:46:04.018292] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 339.267 ms, result 0 00:19:12.507 00:19:12.507 00:19:12.507 03:46:04 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:14.410 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:19:14.411 03:46:06 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:19:14.411 [2024-10-01 03:46:06.909837] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:19:14.411 [2024-10-01 03:46:06.909960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75511 ] 00:19:14.669 [2024-10-01 03:46:07.057594] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.928 [2024-10-01 03:46:07.271375] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:15.188 [2024-10-01 03:46:07.543770] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:15.188 [2024-10-01 03:46:07.543842] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:15.188 [2024-10-01 03:46:07.698243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.698312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:15.188 [2024-10-01 03:46:07.698327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:15.188 [2024-10-01 03:46:07.698340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.188 [2024-10-01 03:46:07.698390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.698401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:15.188 [2024-10-01 03:46:07.698410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:15.188 [2024-10-01 03:46:07.698418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.188 [2024-10-01 03:46:07.698438] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:15.188 [2024-10-01 03:46:07.699175] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:15.188 [2024-10-01 03:46:07.699193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.699201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:15.188 [2024-10-01 03:46:07.699210] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.760 ms 00:19:15.188 [2024-10-01 03:46:07.699218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.188 [2024-10-01 03:46:07.700548] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:15.188 [2024-10-01 03:46:07.713315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.713349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:15.188 [2024-10-01 03:46:07.713361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.768 ms 00:19:15.188 [2024-10-01 03:46:07.713370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.188 [2024-10-01 03:46:07.713427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.713437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:15.188 [2024-10-01 03:46:07.713446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:15.188 [2024-10-01 03:46:07.713453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.188 [2024-10-01 03:46:07.719915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.720138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:15.188 [2024-10-01 03:46:07.720155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.403 ms 00:19:15.188 [2024-10-01 03:46:07.720163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.188 [2024-10-01 03:46:07.720249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.720259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:15.188 [2024-10-01 03:46:07.720268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:15.188 [2024-10-01 03:46:07.720277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.188 [2024-10-01 03:46:07.720326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.720337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:15.188 [2024-10-01 03:46:07.720346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:15.188 [2024-10-01 03:46:07.720353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.188 [2024-10-01 03:46:07.720378] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:15.188 [2024-10-01 03:46:07.724007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.724039] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:15.188 [2024-10-01 03:46:07.724049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.628 ms 00:19:15.188 [2024-10-01 03:46:07.724057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.188 [2024-10-01 03:46:07.724089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.188 [2024-10-01 03:46:07.724098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:15.188 [2024-10-01 03:46:07.724106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:15.188 [2024-10-01 03:46:07.724114] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.189 [2024-10-01 03:46:07.724145] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:15.189 [2024-10-01 03:46:07.724165] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:15.189 [2024-10-01 03:46:07.724202] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:15.189 [2024-10-01 03:46:07.724218] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:15.189 [2024-10-01 03:46:07.724324] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:15.189 [2024-10-01 03:46:07.724335] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:15.189 [2024-10-01 03:46:07.724346] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:15.189 [2024-10-01 03:46:07.724360] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:15.189 [2024-10-01 03:46:07.724370] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:15.189 [2024-10-01 03:46:07.724378] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:15.189 [2024-10-01 03:46:07.724386] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:15.189 [2024-10-01 03:46:07.724393] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:15.189 [2024-10-01 03:46:07.724401] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:15.189 [2024-10-01 03:46:07.724409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.189 [2024-10-01 03:46:07.724416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:15.189 [2024-10-01 03:46:07.724425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:19:15.189 [2024-10-01 03:46:07.724433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.189 [2024-10-01 03:46:07.724515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.189 [2024-10-01 03:46:07.724526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:15.189 [2024-10-01 03:46:07.724533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:19:15.189 [2024-10-01 03:46:07.724541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.189 [2024-10-01 03:46:07.724655] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:15.189 [2024-10-01 03:46:07.724666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:15.189 [2024-10-01 03:46:07.724675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.189 [2024-10-01 03:46:07.724683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724690] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:15.189 [2024-10-01 03:46:07.724698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:19:15.189 [2024-10-01 03:46:07.724714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:15.189 [2024-10-01 03:46:07.724721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.189 [2024-10-01 03:46:07.724735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:15.189 [2024-10-01 03:46:07.724743] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:15.189 [2024-10-01 03:46:07.724749] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:15.189 [2024-10-01 03:46:07.724764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:15.189 [2024-10-01 03:46:07.724771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:15.189 [2024-10-01 03:46:07.724777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:15.189 [2024-10-01 03:46:07.724791] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:15.189 [2024-10-01 03:46:07.724798] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:15.189 [2024-10-01 03:46:07.724813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.189 [2024-10-01 03:46:07.724826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:15.189 [2024-10-01 03:46:07.724833] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.189 [2024-10-01 03:46:07.724847] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:15.189 [2024-10-01 03:46:07.724854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.189 [2024-10-01 03:46:07.724867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:15.189 [2024-10-01 03:46:07.724873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:15.189 [2024-10-01 03:46:07.724887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:15.189 [2024-10-01 03:46:07.724894] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.189 [2024-10-01 03:46:07.724908] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:15.189 [2024-10-01 03:46:07.724914] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:15.189 [2024-10-01 03:46:07.724921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:15.189 [2024-10-01 03:46:07.724927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:15.189 [2024-10-01 03:46:07.724934] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:15.189 [2024-10-01 03:46:07.724940] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724947] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:15.189 [2024-10-01 03:46:07.724954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:15.189 [2024-10-01 03:46:07.724962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.189 [2024-10-01 03:46:07.724968] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:15.189 [2024-10-01 03:46:07.724976] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:15.189 [2024-10-01 03:46:07.724985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:15.189 [2024-10-01 03:46:07.724995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:15.189 [2024-10-01 03:46:07.725019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:15.189 [2024-10-01 03:46:07.725027] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:15.189 [2024-10-01 03:46:07.725034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:15.189 [2024-10-01 03:46:07.725041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:15.189 [2024-10-01 03:46:07.725050] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:15.189 [2024-10-01 03:46:07.725058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:15.189 [2024-10-01 03:46:07.725067] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:15.189 [2024-10-01 03:46:07.725087] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.189 [2024-10-01 03:46:07.725096] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:15.189 [2024-10-01 03:46:07.725104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:15.189 [2024-10-01 03:46:07.725112] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:15.189 [2024-10-01 03:46:07.725119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:15.189 [2024-10-01 03:46:07.725127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:15.189 [2024-10-01 03:46:07.725135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:15.189 [2024-10-01 03:46:07.725142] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:15.189 [2024-10-01 03:46:07.725150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:15.189 [2024-10-01 03:46:07.725157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:15.189 [2024-10-01 03:46:07.725164] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:15.189 [2024-10-01 03:46:07.725172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:15.190 [2024-10-01 03:46:07.725180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:15.190 [2024-10-01 03:46:07.725187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:15.190 [2024-10-01 03:46:07.725195] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:15.190 [2024-10-01 03:46:07.725202] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:15.190 [2024-10-01 03:46:07.725210] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:15.190 [2024-10-01 03:46:07.725219] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:15.190 [2024-10-01 03:46:07.725226] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:15.190 [2024-10-01 03:46:07.725234] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:15.190 [2024-10-01 03:46:07.725241] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:15.190 [2024-10-01 03:46:07.725249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.190 [2024-10-01 03:46:07.725257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:15.190 [2024-10-01 03:46:07.725265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.662 ms 00:19:15.190 [2024-10-01 03:46:07.725272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.769427] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.769478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:15.449 [2024-10-01 03:46:07.769492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.106 ms 00:19:15.449 [2024-10-01 03:46:07.769501] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.769610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.769620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:15.449 [2024-10-01 03:46:07.769629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:15.449 [2024-10-01 03:46:07.769637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.801852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.801897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:15.449 [2024-10-01 03:46:07.801912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 32.143 ms 00:19:15.449 [2024-10-01 03:46:07.801920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.801968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.801977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:15.449 [2024-10-01 03:46:07.801986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:15.449 [2024-10-01 03:46:07.801994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.802478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.802501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:15.449 [2024-10-01 03:46:07.802512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:19:15.449 [2024-10-01 03:46:07.802523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.802669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.802680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:15.449 [2024-10-01 03:46:07.802689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:19:15.449 [2024-10-01 03:46:07.802697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.816065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.816095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:15.449 [2024-10-01 03:46:07.816105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.348 ms 00:19:15.449 [2024-10-01 03:46:07.816113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.828818] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:19:15.449 [2024-10-01 03:46:07.828853] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:15.449 [2024-10-01 03:46:07.828865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.828874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:15.449 [2024-10-01 03:46:07.828883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.653 ms 00:19:15.449 [2024-10-01 03:46:07.828892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.853292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.853326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:15.449 [2024-10-01 03:46:07.853338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.353 ms 00:19:15.449 [2024-10-01 03:46:07.853346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.864897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.865092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:15.449 [2024-10-01 03:46:07.865109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.507 ms 00:19:15.449 [2024-10-01 03:46:07.865118] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.876278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.876401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:15.449 [2024-10-01 03:46:07.876416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.113 ms 00:19:15.449 [2024-10-01 03:46:07.876424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.877065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.877086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:15.449 [2024-10-01 03:46:07.877097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:19:15.449 [2024-10-01 03:46:07.877106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.936158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.936221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:15.449 [2024-10-01 03:46:07.936235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 59.032 ms 00:19:15.449 [2024-10-01 03:46:07.936244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.947086] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:15.449 [2024-10-01 03:46:07.950192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.950221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:15.449 [2024-10-01 03:46:07.950235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.897 ms 00:19:15.449 [2024-10-01 03:46:07.950247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.950358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.950369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:15.449 [2024-10-01 03:46:07.950379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:15.449 [2024-10-01 03:46:07.950387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.950459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.950469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:15.449 [2024-10-01 03:46:07.950478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:15.449 [2024-10-01 03:46:07.950486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.950508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.950517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:15.449 [2024-10-01 03:46:07.950525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:15.449 [2024-10-01 03:46:07.950543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.950576] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:15.449 [2024-10-01 03:46:07.950587] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.449 [2024-10-01 03:46:07.950595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:15.449 [2024-10-01 03:46:07.950606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:15.449 [2024-10-01 03:46:07.950614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.449 [2024-10-01 03:46:07.974770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.450 [2024-10-01 03:46:07.974909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:15.450 [2024-10-01 03:46:07.974966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.137 ms 00:19:15.450 [2024-10-01 03:46:07.974990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.450 [2024-10-01 03:46:07.975093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:15.450 [2024-10-01 03:46:07.975120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:15.450 [2024-10-01 03:46:07.975144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:15.450 [2024-10-01 03:46:07.975202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:15.450 [2024-10-01 03:46:07.976369] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.653 ms, result 0
00:19:38.122  Copying: 44/1024 [MB] (44 MBps) Copying: 90/1024 [MB] (45 MBps) Copying: 135/1024 [MB] (45 MBps) Copying: 182/1024 [MB] (46 MBps) Copying: 234/1024 [MB] (52 MBps) Copying: 285/1024 [MB] (50 MBps) Copying: 333/1024 [MB] (47 MBps) Copying: 383/1024 [MB] (50 MBps) Copying: 430/1024 [MB] (46 MBps) Copying: 480/1024 [MB] (50 MBps) Copying: 531/1024 [MB] (50 MBps) Copying: 575/1024 [MB] (44 MBps) Copying: 622/1024 [MB] (46 MBps) Copying: 668/1024 [MB] (46 MBps) Copying: 719/1024 [MB] (50 MBps) Copying: 771/1024 [MB] (51 MBps) Copying: 818/1024 [MB] (46 MBps) Copying: 865/1024 [MB] (47 MBps) Copying: 911/1024 [MB] (45 MBps) Copying: 957/1024 [MB] (46 MBps) Copying: 1003/1024 [MB] (46 MBps) Copying: 1023/1024 [MB] (19 MBps) Copying: 1024/1024 [MB] (average 45 MBps)
[2024-10-01 03:46:30.470070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.470138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:38.122 [2024-10-01 03:46:30.470154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:38.122 [2024-10-01 03:46:30.470164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.122 [2024-10-01 03:46:30.472152] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:38.122 [2024-10-01 03:46:30.477949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.477984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:38.122 [2024-10-01 03:46:30.477995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.760 ms 00:19:38.122 [2024-10-01 03:46:30.478018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.122 [2024-10-01 03:46:30.489261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.489358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:38.122
[2024-10-01 03:46:30.489373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.003 ms 00:19:38.122 [2024-10-01 03:46:30.489384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.122 [2024-10-01 03:46:30.507160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.507192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:38.122 [2024-10-01 03:46:30.507203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.761 ms 00:19:38.122 [2024-10-01 03:46:30.507212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.122 [2024-10-01 03:46:30.513385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.513410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:38.122 [2024-10-01 03:46:30.513420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.140 ms 00:19:38.122 [2024-10-01 03:46:30.513428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.122 [2024-10-01 03:46:30.537745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.537781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:38.122 [2024-10-01 03:46:30.537793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.268 ms 00:19:38.122 [2024-10-01 03:46:30.537802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.122 [2024-10-01 03:46:30.551952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.551986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:38.122 [2024-10-01 03:46:30.551999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.115 ms 00:19:38.122 [2024-10-01 03:46:30.552018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.122 [2024-10-01 03:46:30.605492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.605549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:38.122 [2024-10-01 03:46:30.605561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 53.436 ms 00:19:38.122 [2024-10-01 03:46:30.605570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.122 [2024-10-01 03:46:30.629443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.629479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:38.122 [2024-10-01 03:46:30.629491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.858 ms 00:19:38.122 [2024-10-01 03:46:30.629499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.122 [2024-10-01 03:46:30.651768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.122 [2024-10-01 03:46:30.651972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:38.122 [2024-10-01 03:46:30.651989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.235 ms 00:19:38.122 [2024-10-01 03:46:30.651997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.381 [2024-10-01 03:46:30.674121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.381 [2024-10-01 03:46:30.674269] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:38.381 [2024-10-01 03:46:30.674284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.056 ms 00:19:38.382 [2024-10-01 03:46:30.674292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.382 [2024-10-01 03:46:30.696410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.382 [2024-10-01 03:46:30.696441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:38.382 [2024-10-01 03:46:30.696451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.064 ms 00:19:38.382 [2024-10-01 03:46:30.696458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.382 [2024-10-01 03:46:30.696488] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:38.382 [2024-10-01 03:46:30.696503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 122112 / 261120 wr_cnt: 1 state: open 00:19:38.382 [2024-10-01 03:46:30.696514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 
0 state: free 00:19:38.382 [2024-10-01 03:46:30.696660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
44: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.696992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.697000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.697021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.697030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.697037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.697045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.697053] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.697077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:38.382 [2024-10-01 03:46:30.697086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697270] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697292] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:38.383 [2024-10-01 03:46:30.697331] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:38.383 [2024-10-01 03:46:30.697339] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3bc97d54-35af-4bbe-b537-d54a52a02875 00:19:38.383 [2024-10-01 03:46:30.697351] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 122112 00:19:38.383 [2024-10-01 03:46:30.697359] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 123072 00:19:38.383 [2024-10-01 03:46:30.697366] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 122112 00:19:38.383 [2024-10-01 03:46:30.697374] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0079 00:19:38.383 [2024-10-01 03:46:30.697381] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:38.383 [2024-10-01 03:46:30.697389] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:38.383 [2024-10-01 03:46:30.697397] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:38.383 [2024-10-01 03:46:30.697404] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:38.383 [2024-10-01 03:46:30.697411] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:38.383 [2024-10-01 03:46:30.697418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.383 [2024-10-01 03:46:30.697431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:38.383 [2024-10-01 03:46:30.697440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.931 ms 00:19:38.383 [2024-10-01 03:46:30.697448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.710569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.383 [2024-10-01 03:46:30.710600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:38.383 [2024-10-01 03:46:30.710611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.105 ms 00:19:38.383 [2024-10-01 03:46:30.710620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.710980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:38.383 [2024-10-01 03:46:30.710990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:38.383 [2024-10-01 03:46:30.711025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.342 ms 00:19:38.383 [2024-10-01 03:46:30.711033] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.740161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.740312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.383 [2024-10-01 03:46:30.740328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.383 [2024-10-01 03:46:30.740337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.740399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.740407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.383 [2024-10-01 03:46:30.740420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.383 [2024-10-01 03:46:30.740427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.740487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.740497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.383 [2024-10-01 03:46:30.740505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.383 [2024-10-01 03:46:30.740513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.740529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.740537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.383 [2024-10-01 03:46:30.740544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.383 [2024-10-01 03:46:30.740554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.812927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.812977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.383 [2024-10-01 03:46:30.812988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.383 [2024-10-01 03:46:30.812995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.864379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.864430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.383 [2024-10-01 03:46:30.864441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.383 [2024-10-01 03:46:30.864452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.864532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.864542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.383 [2024-10-01 03:46:30.864549] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.383 [2024-10-01 03:46:30.864556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.864584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.864591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.383 [2024-10-01 03:46:30.864597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:19:38.383 [2024-10-01 03:46:30.864604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.864685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.864694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.383 [2024-10-01 03:46:30.864700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.383 [2024-10-01 03:46:30.864707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.864733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.864741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:38.383 [2024-10-01 03:46:30.864747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.383 [2024-10-01 03:46:30.864754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.383 [2024-10-01 03:46:30.864789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.383 [2024-10-01 03:46:30.864797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.384 [2024-10-01 03:46:30.864805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.384 [2024-10-01 03:46:30.864812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.384 [2024-10-01 03:46:30.864848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.384 [2024-10-01 03:46:30.864856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.384 [2024-10-01 03:46:30.864863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.384 [2024-10-01 03:46:30.864870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.384 [2024-10-01 03:46:30.864975] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 395.791 ms, result 0 00:19:39.836 00:19:39.836 00:19:39.836 03:46:32 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:19:40.094 [2024-10-01 03:46:32.389221] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:19:40.094 [2024-10-01 03:46:32.389348] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75774 ] 00:19:40.094 [2024-10-01 03:46:32.537500] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.353 [2024-10-01 03:46:32.714208] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.612 [2024-10-01 03:46:32.941332] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.612 [2024-10-01 03:46:32.941390] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.612 [2024-10-01 03:46:33.093984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.612 [2024-10-01 03:46:33.094041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:40.612 [2024-10-01 03:46:33.094052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:40.612 [2024-10-01 03:46:33.094063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.612 [2024-10-01 03:46:33.094099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.612 [2024-10-01 03:46:33.094107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.612 [2024-10-01 03:46:33.094113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:40.612 [2024-10-01 03:46:33.094120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.612 [2024-10-01 03:46:33.094135] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:40.612 [2024-10-01 03:46:33.094664] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:40.612 [2024-10-01 03:46:33.094677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.612 [2024-10-01 03:46:33.094684] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.612 [2024-10-01 03:46:33.094691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:19:40.612 [2024-10-01 03:46:33.094696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.612 [2024-10-01 03:46:33.095932] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:40.612 [2024-10-01 03:46:33.106083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.612 [2024-10-01 03:46:33.106251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:40.612 [2024-10-01 03:46:33.106267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.152 ms 00:19:40.612 [2024-10-01 03:46:33.106274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.612 [2024-10-01 03:46:33.106318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.612 [2024-10-01 03:46:33.106326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:40.612 [2024-10-01 03:46:33.106334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:40.612 [2024-10-01 03:46:33.106340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.612 [2024-10-01 03:46:33.112667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:40.612 [2024-10-01 03:46:33.112696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.612 [2024-10-01 03:46:33.112705] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.280 ms 00:19:40.612 [2024-10-01 03:46:33.112711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.612 [2024-10-01 03:46:33.112770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.612 [2024-10-01 03:46:33.112778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.612 [2024-10-01 03:46:33.112785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:19:40.612 [2024-10-01 03:46:33.112791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.612 [2024-10-01 03:46:33.112832] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.612 [2024-10-01 03:46:33.112841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:40.612 [2024-10-01 03:46:33.112848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:40.612 [2024-10-01 03:46:33.112855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.612 [2024-10-01 03:46:33.112874] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.612 [2024-10-01 03:46:33.115979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.612 [2024-10-01 03:46:33.116014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.612 [2024-10-01 03:46:33.116022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.111 ms 00:19:40.613 [2024-10-01 03:46:33.116028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.613 [2024-10-01 03:46:33.116053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.613 [2024-10-01 03:46:33.116061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:40.613 [2024-10-01 03:46:33.116067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:40.613 [2024-10-01 03:46:33.116073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.613 [2024-10-01 03:46:33.116092] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:40.613 [2024-10-01 03:46:33.116110] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:40.613 [2024-10-01 03:46:33.116142] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:40.613 [2024-10-01 03:46:33.116155] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:40.613 [2024-10-01 03:46:33.116238] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:40.613 [2024-10-01 03:46:33.116247] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:40.613 [2024-10-01 03:46:33.116256] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:40.613 [2024-10-01 03:46:33.116266] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116275] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116282] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:40.613 [2024-10-01 03:46:33.116288] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:40.613 [2024-10-01 03:46:33.116294] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:40.613 [2024-10-01 03:46:33.116301] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:40.613 [2024-10-01 03:46:33.116307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.613 [2024-10-01 03:46:33.116315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:40.613 [2024-10-01 03:46:33.116321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.217 ms 00:19:40.613 [2024-10-01 03:46:33.116327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.613 [2024-10-01 03:46:33.116391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.613 [2024-10-01 03:46:33.116401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:40.613 [2024-10-01 03:46:33.116407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:19:40.613 [2024-10-01 03:46:33.116413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.613 [2024-10-01 03:46:33.116498] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:40.613 [2024-10-01 03:46:33.116507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:40.613 [2024-10-01 03:46:33.116514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:40.613 [2024-10-01 03:46:33.116534] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116539] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:40.613 [2024-10-01 03:46:33.116551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.613 [2024-10-01 03:46:33.116562] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:40.613 [2024-10-01 03:46:33.116567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:40.613 [2024-10-01 03:46:33.116573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.613 [2024-10-01 03:46:33.116584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:40.613 [2024-10-01 03:46:33.116590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:40.613 [2024-10-01 03:46:33.116595] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:40.613 [2024-10-01 03:46:33.116608] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116613] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116619] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:40.613 [2024-10-01 03:46:33.116625] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116636] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:40.613 [2024-10-01 03:46:33.116642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116652] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:40.613 [2024-10-01 03:46:33.116658] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:40.613 [2024-10-01 03:46:33.116674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116684] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:40.613 [2024-10-01 03:46:33.116689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116695] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.613 [2024-10-01 03:46:33.116700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:40.613 [2024-10-01 03:46:33.116705] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:40.613 [2024-10-01 03:46:33.116710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.613 [2024-10-01 03:46:33.116716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:40.613 [2024-10-01 03:46:33.116721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:19:40.613 [2024-10-01 03:46:33.116726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:40.613 [2024-10-01 03:46:33.116738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:40.613 [2024-10-01 03:46:33.116742] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116747] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:40.613 [2024-10-01 03:46:33.116753] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:40.613 [2024-10-01 03:46:33.116760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.613 [2024-10-01 03:46:33.116773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:40.613 [2024-10-01 03:46:33.116779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:40.613 [2024-10-01 03:46:33.116784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:40.613 
[2024-10-01 03:46:33.116790] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:40.613 [2024-10-01 03:46:33.116795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:40.613 [2024-10-01 03:46:33.116800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:40.613 [2024-10-01 03:46:33.116807] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:40.613 [2024-10-01 03:46:33.116814] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.613 [2024-10-01 03:46:33.116820] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:40.613 [2024-10-01 03:46:33.116826] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:40.613 [2024-10-01 03:46:33.116832] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:40.613 [2024-10-01 03:46:33.116837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:40.613 [2024-10-01 03:46:33.116843] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:40.613 [2024-10-01 03:46:33.116849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:40.613 [2024-10-01 03:46:33.116854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:40.613 [2024-10-01 03:46:33.116860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:40.613 [2024-10-01 03:46:33.116865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:40.613 [2024-10-01 03:46:33.116871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:40.613 [2024-10-01 03:46:33.116877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:40.613 [2024-10-01 03:46:33.116884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:40.613 [2024-10-01 03:46:33.116890] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:40.613 [2024-10-01 03:46:33.116896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:40.613 [2024-10-01 03:46:33.116901] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:40.613 [2024-10-01 03:46:33.116907] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.613 [2024-10-01 03:46:33.116913] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:40.613 [2024-10-01 03:46:33.116919] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:40.614 [2024-10-01 03:46:33.116925] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:40.614 [2024-10-01 03:46:33.116931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:40.614 [2024-10-01 03:46:33.116936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.614 [2024-10-01 03:46:33.116942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:40.614 [2024-10-01 03:46:33.116948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:19:40.614 [2024-10-01 03:46:33.116953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.614 [2024-10-01 03:46:33.155195] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.614 [2024-10-01 03:46:33.155234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:40.614 [2024-10-01 03:46:33.155244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.194 ms 00:19:40.614 [2024-10-01 03:46:33.155251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.614 [2024-10-01 03:46:33.155327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.614 [2024-10-01 03:46:33.155335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:40.614 [2024-10-01 03:46:33.155342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:19:40.614 [2024-10-01 03:46:33.155349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.872 [2024-10-01 03:46:33.181433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.872 [2024-10-01 03:46:33.181461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:40.872 [2024-10-01 03:46:33.181472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.034 ms 00:19:40.872 [2024-10-01 03:46:33.181479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.872 [2024-10-01 03:46:33.181507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.872 [2024-10-01 03:46:33.181515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:40.872 [2024-10-01 03:46:33.181521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:19:40.872 [2024-10-01 03:46:33.181527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.872 [2024-10-01 03:46:33.181928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.872 [2024-10-01 03:46:33.181943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:40.873 [2024-10-01 03:46:33.181951] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.362 ms 00:19:40.873 [2024-10-01 03:46:33.181961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.182086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.182094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:40.873 [2024-10-01 03:46:33.182102] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:40.873 [2024-10-01 03:46:33.182108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.193168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.193192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.873 [2024-10-01 03:46:33.193200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.044 ms 00:19:40.873 [2024-10-01 03:46:33.193206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.203752] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:19:40.873 [2024-10-01 03:46:33.203782] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:40.873 [2024-10-01 03:46:33.203793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.203800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:40.873 [2024-10-01 03:46:33.203808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.491 ms 00:19:40.873 [2024-10-01 03:46:33.203814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.222575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.222743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:40.873 [2024-10-01 03:46:33.222757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.725 ms 00:19:40.873 [2024-10-01 03:46:33.222764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.231777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.231804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:40.873 [2024-10-01 03:46:33.231812] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.993 ms 00:19:40.873 [2024-10-01 03:46:33.231819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.240285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.240311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:40.873 [2024-10-01 03:46:33.240320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.440 ms 00:19:40.873 [2024-10-01 03:46:33.240326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.240800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.240815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:40.873 [2024-10-01 03:46:33.240822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:19:40.873 [2024-10-01 03:46:33.240829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.288989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.289042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:40.873 [2024-10-01 03:46:33.289054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 48.145 ms 00:19:40.873 [2024-10-01 03:46:33.289061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.297589] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:19:40.873 [2024-10-01 03:46:33.299947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.299972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.873 [2024-10-01 03:46:33.299981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.845 ms 00:19:40.873 [2024-10-01 03:46:33.299992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.300076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.300086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:40.873 [2024-10-01 03:46:33.300094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:40.873 [2024-10-01 03:46:33.300101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.301546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.301574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.873 [2024-10-01 03:46:33.301583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.396 ms 00:19:40.873 [2024-10-01 03:46:33.301591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.301619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.301628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.873 [2024-10-01 03:46:33.301636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:40.873 [2024-10-01 03:46:33.301643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.301675] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:40.873 [2024-10-01 03:46:33.301686] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.301693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:40.873 [2024-10-01 03:46:33.301703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:40.873 [2024-10-01 03:46:33.301710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.320311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.320339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.873 [2024-10-01 03:46:33.320348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.587 ms 00:19:40.873 [2024-10-01 03:46:33.320355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.873 [2024-10-01 03:46:33.320414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.873 [2024-10-01 03:46:33.320423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:40.873 [2024-10-01 03:46:33.320429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:19:40.873 [2024-10-01 03:46:33.320436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:40.873 [2024-10-01 03:46:33.321327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.936 ms, result 0 00:20:02.823  Copying: 47/1024 [MB] (47 MBps) Copying: 94/1024 [MB] (47 MBps) Copying: 144/1024 [MB] (49 MBps) Copying: 190/1024 [MB] (46 MBps) Copying: 236/1024 [MB] (45 MBps) Copying: 282/1024 [MB] (46 MBps) Copying: 329/1024 [MB] (47 MBps) Copying: 372/1024 [MB] (43 MBps) Copying: 419/1024 [MB] (47 MBps) Copying: 466/1024 [MB] (46 MBps) Copying: 516/1024 [MB] (49 MBps) Copying: 563/1024 [MB] (47 MBps) Copying: 611/1024 [MB] (47 MBps) Copying: 659/1024 [MB] (47 MBps) Copying: 705/1024 [MB] (46 MBps) Copying: 757/1024 [MB] (52 MBps) Copying: 806/1024 [MB] (48 MBps) Copying: 857/1024 [MB] (50 MBps) Copying: 903/1024 [MB] (45 MBps) Copying: 952/1024 [MB] (49 MBps) Copying: 999/1024 [MB] (47 MBps) Copying: 1024/1024 [MB] (average 47 MBps)[2024-10-01 03:46:55.222294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.222779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:02.823 [2024-10-01 03:46:55.222984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:02.823 [2024-10-01 03:46:55.223044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.823 [2024-10-01 03:46:55.223115] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:02.823 [2024-10-01 03:46:55.232297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.232329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:02.823 [2024-10-01 03:46:55.232339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.145 ms 00:20:02.823 [2024-10-01 03:46:55.232350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.823 [2024-10-01 03:46:55.232552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.232562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:02.823 [2024-10-01 03:46:55.232570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.182 ms 00:20:02.823 [2024-10-01 03:46:55.232576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.823 [2024-10-01 03:46:55.235786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.235812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:02.823 [2024-10-01 03:46:55.235821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.198 ms 00:20:02.823 [2024-10-01 03:46:55.235828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.823 [2024-10-01 03:46:55.240678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.240703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:02.823 [2024-10-01 03:46:55.240713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.822 ms 00:20:02.823 [2024-10-01 03:46:55.240720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.823 [2024-10-01 03:46:55.259226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.259256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:02.823 
[2024-10-01 03:46:55.259265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.471 ms 00:20:02.823 [2024-10-01 03:46:55.259271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.823 [2024-10-01 03:46:55.270915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.270943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:02.823 [2024-10-01 03:46:55.270953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.616 ms 00:20:02.823 [2024-10-01 03:46:55.270961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.823 [2024-10-01 03:46:55.322161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.322196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:02.823 [2024-10-01 03:46:55.322210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.168 ms 00:20:02.823 [2024-10-01 03:46:55.322216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.823 [2024-10-01 03:46:55.340231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.340262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:02.823 [2024-10-01 03:46:55.340271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.002 ms 00:20:02.823 [2024-10-01 03:46:55.340277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:02.823 [2024-10-01 03:46:55.357392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:02.823 [2024-10-01 03:46:55.357626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:02.823 [2024-10-01 03:46:55.357639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.087 ms 00:20:02.823 [2024-10-01 03:46:55.357646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.084 [2024-10-01 03:46:55.374569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.084 [2024-10-01 03:46:55.374595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:03.084 [2024-10-01 03:46:55.374604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.898 ms 00:20:03.084 [2024-10-01 03:46:55.374609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.084 [2024-10-01 03:46:55.391625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.084 [2024-10-01 03:46:55.391652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:03.084 [2024-10-01 03:46:55.391661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.967 ms 00:20:03.084 [2024-10-01 03:46:55.391667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.084 [2024-10-01 03:46:55.391695] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:03.084 [2024-10-01 03:46:55.391708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:20:03.084 [2024-10-01 03:46:55.391717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:03.084 [2024-10-01 03:46:55.391724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:03.084 [2024-10-01 03:46:55.391731] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
[... 2024-10-01 03:46:55.391737 through 03:46:55.392350: Bands 5 through 99 logged by ftl_debug.c: 167:ftl_dev_dump_bands with identical values: 0 / 261120 wr_cnt: 0 state: free ...]
00:20:03.085 [2024-10-01 03:46:55.392357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:20:03.085 [2024-10-01 03:46:55.392370] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:20:03.085 [2024-10-01 03:46:55.392377] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 3bc97d54-35af-4bbe-b537-d54a52a02875
00:20:03.085 [2024-10-01 03:46:55.392387] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072
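The statistics dump continues below with the write counters: 9920 total media writes against 8960 user writes. The WAF (write amplification factor) line that follows is simply the ratio of those two counters; a back-of-envelope check with the figures from this log (a throwaway one-liner, not part of the test suite):

    # WAF = total writes / user writes, using the ftl_debug.c counters below
    awk 'BEGIN { printf "WAF: %.4f\n", 9920 / 8960 }'    # -> WAF: 1.1071

which matches the value logged by ftl_debug.c: 216.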
00:20:03.085 [2024-10-01 03:46:55.392393] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 9920 00:20:03.085 [2024-10-01 03:46:55.392400] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 8960 00:20:03.085 [2024-10-01 03:46:55.392407] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.1071 00:20:03.085 [2024-10-01 03:46:55.392413] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:03.085 [2024-10-01 03:46:55.392420] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:03.085 [2024-10-01 03:46:55.392426] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:03.085 [2024-10-01 03:46:55.392431] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:03.085 [2024-10-01 03:46:55.392436] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:03.085 [2024-10-01 03:46:55.392442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.085 [2024-10-01 03:46:55.392448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:03.085 [2024-10-01 03:46:55.392461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:20:03.085 [2024-10-01 03:46:55.392467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.402321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.085 [2024-10-01 03:46:55.402350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:03.085 [2024-10-01 03:46:55.402359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.841 ms 00:20:03.085 [2024-10-01 03:46:55.402366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.402672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:03.085 [2024-10-01 03:46:55.402681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:03.085 [2024-10-01 03:46:55.402690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.286 ms 00:20:03.085 [2024-10-01 03:46:55.402696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.425835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.425871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:03.085 [2024-10-01 03:46:55.425881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.425887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.425944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.425951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:03.085 [2024-10-01 03:46:55.425957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.425967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.426050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.426059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:03.085 [2024-10-01 03:46:55.426066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.426072] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.426085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.426093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:03.085 [2024-10-01 03:46:55.426099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.426105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.488162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.488237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:03.085 [2024-10-01 03:46:55.488249] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.488256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.538830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.538883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:03.085 [2024-10-01 03:46:55.538895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.538906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.538966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.538975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:03.085 [2024-10-01 03:46:55.538982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.538988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.539048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.539057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:03.085 [2024-10-01 03:46:55.539063] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.539069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.539150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.539183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:03.085 [2024-10-01 03:46:55.539190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.539195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.539220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.085 [2024-10-01 03:46:55.539228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:03.085 [2024-10-01 03:46:55.539235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.085 [2024-10-01 03:46:55.539241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.085 [2024-10-01 03:46:55.539278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.086 [2024-10-01 03:46:55.539285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:03.086 [2024-10-01 03:46:55.539292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:03.086 [2024-10-01 03:46:55.539298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.086 [2024-10-01 03:46:55.539333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:03.086 [2024-10-01 03:46:55.539342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:03.086 [2024-10-01 03:46:55.539349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:03.086 [2024-10-01 03:46:55.539355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:03.086 [2024-10-01 03:46:55.539461] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 317.173 ms, result 0 00:20:03.655 00:20:03.655 00:20:03.913 03:46:56 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:06.442 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 74743 00:20:06.442 03:46:58 ftl.ftl_restore -- common/autotest_common.sh@950 -- # '[' -z 74743 ']' 00:20:06.442 Process with pid 74743 is not found 00:20:06.442 03:46:58 ftl.ftl_restore -- common/autotest_common.sh@954 -- # kill -0 74743 00:20:06.442 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (74743) - No such process 00:20:06.442 03:46:58 ftl.ftl_restore -- common/autotest_common.sh@977 -- # echo 'Process with pid 74743 is not found' 00:20:06.442 Remove shared memory files 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:20:06.442 03:46:58 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:20:06.442 ************************************ 00:20:06.442 END TEST ftl_restore 00:20:06.442 ************************************ 00:20:06.442 00:20:06.442 real 2m4.014s 00:20:06.442 user 1m53.865s 00:20:06.442 sys 0m11.822s 00:20:06.442 03:46:58 ftl.ftl_restore -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:06.442 03:46:58 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:06.442 03:46:58 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:06.442 03:46:58 ftl -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:06.443 03:46:58 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:06.443 03:46:58 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:06.443 ************************************ 00:20:06.443 START TEST ftl_dirty_shutdown 
00:20:06.443 ************************************ 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:20:06.443 * Looking for test storage... 00:20:06.443 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:20:06.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.443 --rc genhtml_branch_coverage=1 00:20:06.443 --rc genhtml_function_coverage=1 00:20:06.443 --rc genhtml_legend=1 00:20:06.443 --rc geninfo_all_blocks=1 00:20:06.443 --rc geninfo_unexecuted_blocks=1 00:20:06.443 00:20:06.443 ' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:20:06.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.443 --rc genhtml_branch_coverage=1 00:20:06.443 --rc genhtml_function_coverage=1 00:20:06.443 --rc genhtml_legend=1 00:20:06.443 --rc geninfo_all_blocks=1 00:20:06.443 --rc geninfo_unexecuted_blocks=1 00:20:06.443 00:20:06.443 ' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:20:06.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.443 --rc genhtml_branch_coverage=1 00:20:06.443 --rc genhtml_function_coverage=1 00:20:06.443 --rc genhtml_legend=1 00:20:06.443 --rc geninfo_all_blocks=1 00:20:06.443 --rc geninfo_unexecuted_blocks=1 00:20:06.443 00:20:06.443 ' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:20:06.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:06.443 --rc genhtml_branch_coverage=1 00:20:06.443 --rc genhtml_function_coverage=1 00:20:06.443 --rc genhtml_legend=1 00:20:06.443 --rc geninfo_all_blocks=1 00:20:06.443 --rc geninfo_unexecuted_blocks=1 00:20:06.443 00:20:06.443 ' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:20:06.443 03:46:58 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=76116 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 76116 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@831 -- # '[' -z 76116 ']' 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:06.443 03:46:58 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:20:06.443 [2024-10-01 03:46:58.773334] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
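The prepare steps traced below attach the QEMU NVMe controller at 0000:00:11.0 and size each bdev before carving it up: get_bdev_size pulls .block_size and .num_blocks out of bdev_get_bdevs JSON with jq and converts the product to MiB. A condensed sketch of that helper as it appears in the xtrace (reconstructed from the trace, not the verbatim autotest_common.sh source, which carries extra validation):

    get_bdev_size() {
        # Query the bdev, multiply block size by block count, report MiB.
        # Values from this log: 4096 * 1310720 / 1048576 = 5120 MiB for
        # nvme0n1, and 4096 * 26476544 / 1048576 = 103424 MiB for the lvol.
        local bdev_name=$1 bdev_info bs nb
        bdev_info=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$bdev_name")
        bs=$(jq '.[] .block_size' <<< "$bdev_info")
        nb=$(jq '.[] .num_blocks' <<< "$bdev_info")
        echo $(( bs * nb / 1024 / 1024 ))
    }

The 103424 MiB figure is what the script passes to bdev_lvol_create for the thin-provisioned volume, and 5171 MiB is the slice later split off nvc0n1 (bdev_split_create nvc0n1 -s 5171 1) to serve as the nvc0n1p0 write-buffer cache.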
00:20:06.443 [2024-10-01 03:46:58.773706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76116 ] 00:20:06.443 [2024-10-01 03:46:58.922372] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.702 [2024-10-01 03:46:59.133389] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.269 03:46:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.269 03:46:59 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # return 0 00:20:07.269 03:46:59 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:07.269 03:46:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:20:07.269 03:46:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:07.269 03:46:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:20:07.269 03:46:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:20:07.269 03:46:59 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:07.528 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:07.528 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:20:07.528 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:07.528 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:20:07.528 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:07.528 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:07.528 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:07.528 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:07.787 { 00:20:07.787 "name": "nvme0n1", 00:20:07.787 "aliases": [ 00:20:07.787 "73489d88-9a3d-4112-af40-3c35b80e9ed2" 00:20:07.787 ], 00:20:07.787 "product_name": "NVMe disk", 00:20:07.787 "block_size": 4096, 00:20:07.787 "num_blocks": 1310720, 00:20:07.787 "uuid": "73489d88-9a3d-4112-af40-3c35b80e9ed2", 00:20:07.787 "numa_id": -1, 00:20:07.787 "assigned_rate_limits": { 00:20:07.787 "rw_ios_per_sec": 0, 00:20:07.787 "rw_mbytes_per_sec": 0, 00:20:07.787 "r_mbytes_per_sec": 0, 00:20:07.787 "w_mbytes_per_sec": 0 00:20:07.787 }, 00:20:07.787 "claimed": true, 00:20:07.787 "claim_type": "read_many_write_one", 00:20:07.787 "zoned": false, 00:20:07.787 "supported_io_types": { 00:20:07.787 "read": true, 00:20:07.787 "write": true, 00:20:07.787 "unmap": true, 00:20:07.787 "flush": true, 00:20:07.787 "reset": true, 00:20:07.787 "nvme_admin": true, 00:20:07.787 "nvme_io": true, 00:20:07.787 "nvme_io_md": false, 00:20:07.787 "write_zeroes": true, 00:20:07.787 "zcopy": false, 00:20:07.787 "get_zone_info": false, 00:20:07.787 "zone_management": false, 00:20:07.787 "zone_append": false, 00:20:07.787 "compare": true, 00:20:07.787 "compare_and_write": false, 00:20:07.787 "abort": true, 00:20:07.787 "seek_hole": false, 00:20:07.787 "seek_data": false, 00:20:07.787 
"copy": true, 00:20:07.787 "nvme_iov_md": false 00:20:07.787 }, 00:20:07.787 "driver_specific": { 00:20:07.787 "nvme": [ 00:20:07.787 { 00:20:07.787 "pci_address": "0000:00:11.0", 00:20:07.787 "trid": { 00:20:07.787 "trtype": "PCIe", 00:20:07.787 "traddr": "0000:00:11.0" 00:20:07.787 }, 00:20:07.787 "ctrlr_data": { 00:20:07.787 "cntlid": 0, 00:20:07.787 "vendor_id": "0x1b36", 00:20:07.787 "model_number": "QEMU NVMe Ctrl", 00:20:07.787 "serial_number": "12341", 00:20:07.787 "firmware_revision": "8.0.0", 00:20:07.787 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:07.787 "oacs": { 00:20:07.787 "security": 0, 00:20:07.787 "format": 1, 00:20:07.787 "firmware": 0, 00:20:07.787 "ns_manage": 1 00:20:07.787 }, 00:20:07.787 "multi_ctrlr": false, 00:20:07.787 "ana_reporting": false 00:20:07.787 }, 00:20:07.787 "vs": { 00:20:07.787 "nvme_version": "1.4" 00:20:07.787 }, 00:20:07.787 "ns_data": { 00:20:07.787 "id": 1, 00:20:07.787 "can_share": false 00:20:07.787 } 00:20:07.787 } 00:20:07.787 ], 00:20:07.787 "mp_policy": "active_passive" 00:20:07.787 } 00:20:07.787 } 00:20:07.787 ]' 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:07.787 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:08.047 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=a1dfdd74-0b64-4beb-8888-2f9dc3972e0b 00:20:08.047 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:20:08.047 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a1dfdd74-0b64-4beb-8888-2f9dc3972e0b 00:20:08.305 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:08.563 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=719a4a96-9d83-4c2d-bba8-0ecf361227c5 00:20:08.563 03:47:00 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 719a4a96-9d83-4c2d-bba8-0ecf361227c5 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:08.823 { 00:20:08.823 "name": "95d78b36-b6c5-4dc4-9071-9938e3037c5c", 00:20:08.823 "aliases": [ 00:20:08.823 "lvs/nvme0n1p0" 00:20:08.823 ], 00:20:08.823 "product_name": "Logical Volume", 00:20:08.823 "block_size": 4096, 00:20:08.823 "num_blocks": 26476544, 00:20:08.823 "uuid": "95d78b36-b6c5-4dc4-9071-9938e3037c5c", 00:20:08.823 "assigned_rate_limits": { 00:20:08.823 "rw_ios_per_sec": 0, 00:20:08.823 "rw_mbytes_per_sec": 0, 00:20:08.823 "r_mbytes_per_sec": 0, 00:20:08.823 "w_mbytes_per_sec": 0 00:20:08.823 }, 00:20:08.823 "claimed": false, 00:20:08.823 "zoned": false, 00:20:08.823 "supported_io_types": { 00:20:08.823 "read": true, 00:20:08.823 "write": true, 00:20:08.823 "unmap": true, 00:20:08.823 "flush": false, 00:20:08.823 "reset": true, 00:20:08.823 "nvme_admin": false, 00:20:08.823 "nvme_io": false, 00:20:08.823 "nvme_io_md": false, 00:20:08.823 "write_zeroes": true, 00:20:08.823 "zcopy": false, 00:20:08.823 "get_zone_info": false, 00:20:08.823 "zone_management": false, 00:20:08.823 "zone_append": false, 00:20:08.823 "compare": false, 00:20:08.823 "compare_and_write": false, 00:20:08.823 "abort": false, 00:20:08.823 "seek_hole": true, 00:20:08.823 "seek_data": true, 00:20:08.823 "copy": false, 00:20:08.823 "nvme_iov_md": false 00:20:08.823 }, 00:20:08.823 "driver_specific": { 00:20:08.823 "lvol": { 00:20:08.823 "lvol_store_uuid": "719a4a96-9d83-4c2d-bba8-0ecf361227c5", 00:20:08.823 "base_bdev": "nvme0n1", 00:20:08.823 "thin_provision": true, 00:20:08.823 "num_allocated_clusters": 0, 00:20:08.823 "snapshot": false, 00:20:08.823 "clone": false, 00:20:08.823 "esnap_clone": false 00:20:08.823 } 00:20:08.823 } 00:20:08.823 } 00:20:08.823 ]' 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:08.823 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:09.082 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:09.082 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:09.082 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:09.082 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:20:09.082 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:20:09.082 03:47:01 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:09.341 { 00:20:09.341 "name": "95d78b36-b6c5-4dc4-9071-9938e3037c5c", 00:20:09.341 "aliases": [ 00:20:09.341 "lvs/nvme0n1p0" 00:20:09.341 ], 00:20:09.341 "product_name": "Logical Volume", 00:20:09.341 "block_size": 4096, 00:20:09.341 "num_blocks": 26476544, 00:20:09.341 "uuid": "95d78b36-b6c5-4dc4-9071-9938e3037c5c", 00:20:09.341 "assigned_rate_limits": { 00:20:09.341 "rw_ios_per_sec": 0, 00:20:09.341 "rw_mbytes_per_sec": 0, 00:20:09.341 "r_mbytes_per_sec": 0, 00:20:09.341 "w_mbytes_per_sec": 0 00:20:09.341 }, 00:20:09.341 "claimed": false, 00:20:09.341 "zoned": false, 00:20:09.341 "supported_io_types": { 00:20:09.341 "read": true, 00:20:09.341 "write": true, 00:20:09.341 "unmap": true, 00:20:09.341 "flush": false, 00:20:09.341 "reset": true, 00:20:09.341 "nvme_admin": false, 00:20:09.341 "nvme_io": false, 00:20:09.341 "nvme_io_md": false, 00:20:09.341 "write_zeroes": true, 00:20:09.341 "zcopy": false, 00:20:09.341 "get_zone_info": false, 00:20:09.341 "zone_management": false, 00:20:09.341 "zone_append": false, 00:20:09.341 "compare": false, 00:20:09.341 "compare_and_write": false, 00:20:09.341 "abort": false, 00:20:09.341 "seek_hole": true, 00:20:09.341 "seek_data": true, 00:20:09.341 "copy": false, 00:20:09.341 "nvme_iov_md": false 00:20:09.341 }, 00:20:09.341 "driver_specific": { 00:20:09.341 "lvol": { 00:20:09.341 "lvol_store_uuid": "719a4a96-9d83-4c2d-bba8-0ecf361227c5", 00:20:09.341 "base_bdev": "nvme0n1", 00:20:09.341 "thin_provision": true, 00:20:09.341 "num_allocated_clusters": 0, 00:20:09.341 "snapshot": false, 00:20:09.341 "clone": false, 00:20:09.341 "esnap_clone": false 00:20:09.341 } 00:20:09.341 } 00:20:09.341 } 00:20:09.341 ]' 00:20:09.341 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:09.601 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:09.601 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:09.601 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:09.601 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:09.601 03:47:01 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:09.601 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:20:09.601 03:47:01 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:09.601 03:47:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:20:09.601 03:47:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:09.601 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:09.601 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:20:09.601 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:20:09.601 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:20:09.601 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 95d78b36-b6c5-4dc4-9071-9938e3037c5c 00:20:09.859 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:20:09.859 { 00:20:09.859 "name": "95d78b36-b6c5-4dc4-9071-9938e3037c5c", 00:20:09.859 "aliases": [ 00:20:09.859 "lvs/nvme0n1p0" 00:20:09.859 ], 00:20:09.859 "product_name": "Logical Volume", 00:20:09.859 "block_size": 4096, 00:20:09.859 "num_blocks": 26476544, 00:20:09.859 "uuid": "95d78b36-b6c5-4dc4-9071-9938e3037c5c", 00:20:09.859 "assigned_rate_limits": { 00:20:09.859 "rw_ios_per_sec": 0, 00:20:09.859 "rw_mbytes_per_sec": 0, 00:20:09.859 "r_mbytes_per_sec": 0, 00:20:09.859 "w_mbytes_per_sec": 0 00:20:09.859 }, 00:20:09.859 "claimed": false, 00:20:09.859 "zoned": false, 00:20:09.859 "supported_io_types": { 00:20:09.859 "read": true, 00:20:09.859 "write": true, 00:20:09.859 "unmap": true, 00:20:09.859 "flush": false, 00:20:09.859 "reset": true, 00:20:09.859 "nvme_admin": false, 00:20:09.859 "nvme_io": false, 00:20:09.859 "nvme_io_md": false, 00:20:09.859 "write_zeroes": true, 00:20:09.859 "zcopy": false, 00:20:09.859 "get_zone_info": false, 00:20:09.859 "zone_management": false, 00:20:09.859 "zone_append": false, 00:20:09.859 "compare": false, 00:20:09.859 "compare_and_write": false, 00:20:09.859 "abort": false, 00:20:09.859 "seek_hole": true, 00:20:09.859 "seek_data": true, 00:20:09.859 "copy": false, 00:20:09.859 "nvme_iov_md": false 00:20:09.859 }, 00:20:09.859 "driver_specific": { 00:20:09.859 "lvol": { 00:20:09.859 "lvol_store_uuid": "719a4a96-9d83-4c2d-bba8-0ecf361227c5", 00:20:09.859 "base_bdev": "nvme0n1", 00:20:09.859 "thin_provision": true, 00:20:09.859 "num_allocated_clusters": 0, 00:20:09.859 "snapshot": false, 00:20:09.859 "clone": false, 00:20:09.859 "esnap_clone": false 00:20:09.859 } 00:20:09.860 } 00:20:09.860 } 00:20:09.860 ]' 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # nb=26476544 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # echo 103424 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 95d78b36-b6c5-4dc4-9071-9938e3037c5c 
--l2p_dram_limit 10' 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:09.860 03:47:02 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 95d78b36-b6c5-4dc4-9071-9938e3037c5c --l2p_dram_limit 10 -c nvc0n1p0 00:20:10.117 [2024-10-01 03:47:02.557396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.117 [2024-10-01 03:47:02.557464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:10.117 [2024-10-01 03:47:02.557481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:10.117 [2024-10-01 03:47:02.557488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.117 [2024-10-01 03:47:02.557544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.117 [2024-10-01 03:47:02.557552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:10.117 [2024-10-01 03:47:02.557560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:10.117 [2024-10-01 03:47:02.557567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.117 [2024-10-01 03:47:02.557590] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:10.117 [2024-10-01 03:47:02.558227] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:10.117 [2024-10-01 03:47:02.558246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.117 [2024-10-01 03:47:02.558253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:10.117 [2024-10-01 03:47:02.558261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:20:10.117 [2024-10-01 03:47:02.558269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.117 [2024-10-01 03:47:02.558305] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID cad42eb8-b470-467b-8b2f-e8c0d12f6694 00:20:10.117 [2024-10-01 03:47:02.559667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.117 [2024-10-01 03:47:02.559846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:10.117 [2024-10-01 03:47:02.559862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:20:10.117 [2024-10-01 03:47:02.559871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.117 [2024-10-01 03:47:02.566845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.117 [2024-10-01 03:47:02.566982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:10.117 [2024-10-01 03:47:02.566994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.899 ms 00:20:10.117 [2024-10-01 03:47:02.567011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.117 [2024-10-01 03:47:02.567091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.117 [2024-10-01 03:47:02.567101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:10.117 [2024-10-01 03:47:02.567109] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:20:10.118 [2024-10-01 03:47:02.567121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.118 [2024-10-01 03:47:02.567171] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.118 [2024-10-01 03:47:02.567181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:10.118 [2024-10-01 03:47:02.567188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:10.118 [2024-10-01 03:47:02.567195] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.118 [2024-10-01 03:47:02.567214] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:10.118 [2024-10-01 03:47:02.570492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.118 [2024-10-01 03:47:02.570517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:10.118 [2024-10-01 03:47:02.570528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.283 ms 00:20:10.118 [2024-10-01 03:47:02.570534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.118 [2024-10-01 03:47:02.570572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.118 [2024-10-01 03:47:02.570580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:10.118 [2024-10-01 03:47:02.570588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:10.118 [2024-10-01 03:47:02.570596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.118 [2024-10-01 03:47:02.570612] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:10.118 [2024-10-01 03:47:02.570724] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:10.118 [2024-10-01 03:47:02.570738] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:10.118 [2024-10-01 03:47:02.570748] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:10.118 [2024-10-01 03:47:02.570761] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:10.118 [2024-10-01 03:47:02.570768] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:10.118 [2024-10-01 03:47:02.570777] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:10.118 [2024-10-01 03:47:02.570783] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:10.118 [2024-10-01 03:47:02.570791] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:10.118 [2024-10-01 03:47:02.570796] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:10.118 [2024-10-01 03:47:02.570804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.118 [2024-10-01 03:47:02.570816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:10.118 [2024-10-01 03:47:02.570823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:10.118 [2024-10-01 03:47:02.570829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.118 [2024-10-01 03:47:02.570897] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.118 [2024-10-01 03:47:02.570906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:10.118 [2024-10-01 03:47:02.570914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:20:10.118 [2024-10-01 03:47:02.570920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.118 [2024-10-01 03:47:02.570996] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:10.118 [2024-10-01 03:47:02.571019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:10.118 [2024-10-01 03:47:02.571028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.118 [2024-10-01 03:47:02.571035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571043] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:10.118 [2024-10-01 03:47:02.571048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:10.118 [2024-10-01 03:47:02.571060] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:10.118 [2024-10-01 03:47:02.571068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571074] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.118 [2024-10-01 03:47:02.571081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:10.118 [2024-10-01 03:47:02.571088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:10.118 [2024-10-01 03:47:02.571095] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:10.118 [2024-10-01 03:47:02.571100] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:10.118 [2024-10-01 03:47:02.571108] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:10.118 [2024-10-01 03:47:02.571113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:10.118 [2024-10-01 03:47:02.571127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:10.118 [2024-10-01 03:47:02.571133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:10.118 [2024-10-01 03:47:02.571147] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.118 [2024-10-01 03:47:02.571159] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:10.118 [2024-10-01 03:47:02.571165] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571171] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.118 [2024-10-01 03:47:02.571176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:10.118 [2024-10-01 03:47:02.571182] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.118 [2024-10-01 03:47:02.571193] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:10.118 [2024-10-01 03:47:02.571198] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571205] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:10.118 [2024-10-01 03:47:02.571210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:10.118 [2024-10-01 03:47:02.571218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.118 [2024-10-01 03:47:02.571229] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:10.118 [2024-10-01 03:47:02.571234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:10.118 [2024-10-01 03:47:02.571240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:10.118 [2024-10-01 03:47:02.571246] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:10.118 [2024-10-01 03:47:02.571252] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:10.118 [2024-10-01 03:47:02.571257] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571263] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:10.118 [2024-10-01 03:47:02.571269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:10.118 [2024-10-01 03:47:02.571275] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571281] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:10.118 [2024-10-01 03:47:02.571288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:10.118 [2024-10-01 03:47:02.571295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:10.118 [2024-10-01 03:47:02.571303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:10.118 [2024-10-01 03:47:02.571311] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:10.118 [2024-10-01 03:47:02.571319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:10.118 [2024-10-01 03:47:02.571324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:10.118 [2024-10-01 03:47:02.571331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:10.118 [2024-10-01 03:47:02.571336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:10.118 [2024-10-01 03:47:02.571343] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:10.118 [2024-10-01 03:47:02.571351] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:10.118 [2024-10-01 03:47:02.571360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.118 [2024-10-01 03:47:02.571367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:10.118 [2024-10-01 03:47:02.571375] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:10.118 [2024-10-01 03:47:02.571381] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:10.118 [2024-10-01 03:47:02.571388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:10.118 [2024-10-01 03:47:02.571394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:10.118 [2024-10-01 03:47:02.571400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:10.118 [2024-10-01 03:47:02.571406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:10.118 [2024-10-01 03:47:02.571413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:10.118 [2024-10-01 03:47:02.571418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:10.118 [2024-10-01 03:47:02.571427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:10.118 [2024-10-01 03:47:02.571432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:10.118 [2024-10-01 03:47:02.571440] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:10.118 [2024-10-01 03:47:02.571445] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:10.118 [2024-10-01 03:47:02.571453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:10.118 [2024-10-01 03:47:02.571458] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:10.118 [2024-10-01 03:47:02.571467] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:10.118 [2024-10-01 03:47:02.571473] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:10.118 [2024-10-01 03:47:02.571481] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:10.118 [2024-10-01 03:47:02.571487] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:10.118 [2024-10-01 03:47:02.571495] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:10.118 [2024-10-01 03:47:02.571501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:10.118 [2024-10-01 03:47:02.571508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:10.118 [2024-10-01 03:47:02.571514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:20:10.118 [2024-10-01 03:47:02.571522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:10.118 [2024-10-01 03:47:02.571569] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:10.118 [2024-10-01 03:47:02.571581] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:12.644 [2024-10-01 03:47:04.797295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.797376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:12.644 [2024-10-01 03:47:04.797393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2225.715 ms 00:20:12.644 [2024-10-01 03:47:04.797404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.825773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.825838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:12.644 [2024-10-01 03:47:04.825853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.149 ms 00:20:12.644 [2024-10-01 03:47:04.825863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.826026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.826040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:12.644 [2024-10-01 03:47:04.826050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:12.644 [2024-10-01 03:47:04.826065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.866071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.866339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:12.644 [2024-10-01 03:47:04.866368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.968 ms 00:20:12.644 [2024-10-01 03:47:04.866381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.866435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.866448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:12.644 [2024-10-01 03:47:04.866458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:12.644 [2024-10-01 03:47:04.866477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.866983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.867029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:12.644 [2024-10-01 03:47:04.867043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.404 ms 00:20:12.644 [2024-10-01 03:47:04.867057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.867199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.867212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:12.644 [2024-10-01 03:47:04.867222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:20:12.644 [2024-10-01 03:47:04.867235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.882512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.882547] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:12.644 [2024-10-01 03:47:04.882565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.254 ms 00:20:12.644 [2024-10-01 03:47:04.882575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.894763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:12.644 [2024-10-01 03:47:04.897965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.898180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:12.644 [2024-10-01 03:47:04.898203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.302 ms 00:20:12.644 [2024-10-01 03:47:04.898211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.961724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.961979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:12.644 [2024-10-01 03:47:04.962023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.479 ms 00:20:12.644 [2024-10-01 03:47:04.962034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.962228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.962240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:12.644 [2024-10-01 03:47:04.962254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.151 ms 00:20:12.644 [2024-10-01 03:47:04.962262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:04.985668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:04.985712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:12.644 [2024-10-01 03:47:04.985726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.356 ms 00:20:12.644 [2024-10-01 03:47:04.985735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:05.008100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:05.008282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:12.644 [2024-10-01 03:47:05.008304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.324 ms 00:20:12.644 [2024-10-01 03:47:05.008311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:05.008896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:05.008915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:12.644 [2024-10-01 03:47:05.008926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:20:12.644 [2024-10-01 03:47:05.008933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.644 [2024-10-01 03:47:05.077937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.644 [2024-10-01 03:47:05.077987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:12.644 [2024-10-01 03:47:05.078019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 68.961 ms 00:20:12.645 [2024-10-01 03:47:05.078031] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.645 [2024-10-01 03:47:05.102398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.645 [2024-10-01 03:47:05.102445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:12.645 [2024-10-01 03:47:05.102460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.285 ms 00:20:12.645 [2024-10-01 03:47:05.102468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.645 [2024-10-01 03:47:05.125770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.645 [2024-10-01 03:47:05.125816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:12.645 [2024-10-01 03:47:05.125831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.269 ms 00:20:12.645 [2024-10-01 03:47:05.125839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.645 [2024-10-01 03:47:05.149437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.645 [2024-10-01 03:47:05.149638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:12.645 [2024-10-01 03:47:05.149661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.567 ms 00:20:12.645 [2024-10-01 03:47:05.149670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.645 [2024-10-01 03:47:05.149702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.645 [2024-10-01 03:47:05.149711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:12.645 [2024-10-01 03:47:05.149727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:12.645 [2024-10-01 03:47:05.149735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.645 [2024-10-01 03:47:05.149823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:12.645 [2024-10-01 03:47:05.149834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:12.645 [2024-10-01 03:47:05.149844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:20:12.645 [2024-10-01 03:47:05.149852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:12.645 [2024-10-01 03:47:05.150891] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2593.009 ms, result 0 00:20:12.645 { 00:20:12.645 "name": "ftl0", 00:20:12.645 "uuid": "cad42eb8-b470-467b-8b2f-e8c0d12f6694" 00:20:12.645 } 00:20:12.645 03:47:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:20:12.645 03:47:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:12.903 03:47:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:20:12.903 03:47:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:20:12.903 03:47:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:20:13.161 /dev/nbd0 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@869 -- # local i 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # break 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:20:13.162 1+0 records in 00:20:13.162 1+0 records out 00:20:13.162 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316378 s, 12.9 MB/s 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # size=4096 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # return 0 00:20:13.162 03:47:05 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:20:13.162 [2024-10-01 03:47:05.690898] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:13.162 [2024-10-01 03:47:05.691050] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76248 ] 00:20:13.421 [2024-10-01 03:47:05.840992] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:13.679 [2024-10-01 03:47:06.021088] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:18.420  Copying: 257/1024 [MB] (257 MBps) Copying: 516/1024 [MB] (258 MBps) Copying: 768/1024 [MB] (252 MBps) Copying: 1020/1024 [MB] (251 MBps) Copying: 1024/1024 [MB] (average 255 MBps) 00:20:18.420 00:20:18.420 03:47:10 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:20.321 03:47:12 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:20:20.580 [2024-10-01 03:47:12.930033] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
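Before the copy back through /dev/nbd0 proceeds below, the sequence just traced (nbd_start_disk, the waitfornbd polling helper, and the two spdk_dd invocations) is worth seeing in one place: it exposes ftl0 as a kernel block device, fills a 1 GiB file with random data, records its md5, and then pushes the same bytes through the FTL device. A condensed sketch of that sequence, with SPDK_DIR/WORKDIR as placeholder paths mirroring this log, and the sleep cadence in the poll loop an assumption of the sketch:

#!/usr/bin/env bash
# Condensed sketch of the nbd + data-integrity sequence traced above.
# SPDK_DIR and WORKDIR mirror the paths in this log; 262144 x 4 KiB blocks
# is the 1 GiB payload the test uses.
set -euo pipefail
SPDK_DIR=/home/vagrant/spdk_repo/spdk
WORKDIR=$SPDK_DIR/test/ftl

# Expose the FTL bdev as a kernel block device.
"$SPDK_DIR/scripts/rpc.py" nbd_start_disk ftl0 /dev/nbd0

# waitfornbd: poll until the kernel lists nbd0 (the traced helper retries up
# to 20 times; the sleep between attempts is an assumption here), then prove
# that a direct read from the device works.
for _ in $(seq 1 20); do
    grep -q -w nbd0 /proc/partitions && break
    sleep 0.1
done
dd if=/dev/nbd0 of="$WORKDIR/nbdtest" bs=4096 count=1 iflag=direct

# Fill a 1 GiB file with random data and record its checksum ...
"$SPDK_DIR/build/bin/spdk_dd" -m 0x2 --if=/dev/urandom \
    --of="$WORKDIR/testfile" --bs=4096 --count=262144
md5sum "$WORKDIR/testfile"

# ... then write the same bytes through ftl0 via the nbd device.
"$SPDK_DIR/build/bin/spdk_dd" -m 0x2 --if="$WORKDIR/testfile" \
    --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct

The checksum recorded by md5sum here is what the later read-back stages of the test can be checked against.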
00:20:20.580 [2024-10-01 03:47:12.930168] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76327 ] 00:20:20.580 [2024-10-01 03:47:13.080906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.838 [2024-10-01 03:47:13.263874] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.235  Copying: 35/1024 [MB] (35 MBps) Copying: 69/1024 [MB] (34 MBps) Copying: 99/1024 [MB] (29 MBps) Copying: 129/1024 [MB] (30 MBps) Copying: 161/1024 [MB] (32 MBps) Copying: 190/1024 [MB] (28 MBps) Copying: 224/1024 [MB] (33 MBps) Copying: 258/1024 [MB] (34 MBps) Copying: 290/1024 [MB] (31 MBps) Copying: 325/1024 [MB] (35 MBps) Copying: 360/1024 [MB] (35 MBps) Copying: 396/1024 [MB] (35 MBps) Copying: 432/1024 [MB] (36 MBps) Copying: 468/1024 [MB] (35 MBps) Copying: 499/1024 [MB] (31 MBps) Copying: 534/1024 [MB] (35 MBps) Copying: 568/1024 [MB] (33 MBps) Copying: 603/1024 [MB] (35 MBps) Copying: 635/1024 [MB] (31 MBps) Copying: 665/1024 [MB] (30 MBps) Copying: 700/1024 [MB] (35 MBps) Copying: 735/1024 [MB] (35 MBps) Copying: 771/1024 [MB] (35 MBps) Copying: 806/1024 [MB] (35 MBps) Copying: 842/1024 [MB] (35 MBps) Copying: 874/1024 [MB] (31 MBps) Copying: 906/1024 [MB] (32 MBps) Copying: 941/1024 [MB] (35 MBps) Copying: 976/1024 [MB] (34 MBps) Copying: 1011/1024 [MB] (35 MBps) Copying: 1024/1024 [MB] (average 33 MBps) 00:20:52.235 00:20:52.235 03:47:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:20:52.235 03:47:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:20:52.495 03:47:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:52.495 [2024-10-01 03:47:44.988345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.495 [2024-10-01 03:47:44.988537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:52.495 [2024-10-01 03:47:44.988688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:52.495 [2024-10-01 03:47:44.988713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.495 [2024-10-01 03:47:44.988749] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:52.495 [2024-10-01 03:47:44.991039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.495 [2024-10-01 03:47:44.991145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:52.495 [2024-10-01 03:47:44.991198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.254 ms 00:20:52.495 [2024-10-01 03:47:44.991218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.495 [2024-10-01 03:47:44.993191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.495 [2024-10-01 03:47:44.993286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:52.495 [2024-10-01 03:47:44.993337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.937 ms 00:20:52.495 [2024-10-01 03:47:44.993356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.495 [2024-10-01 03:47:45.005599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:52.495 [2024-10-01 03:47:45.005695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:52.495 [2024-10-01 03:47:45.005785] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.214 ms 00:20:52.495 [2024-10-01 03:47:45.005804] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.495 [2024-10-01 03:47:45.010498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.495 [2024-10-01 03:47:45.010595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:52.495 [2024-10-01 03:47:45.010678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.657 ms 00:20:52.495 [2024-10-01 03:47:45.010696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.495 [2024-10-01 03:47:45.029404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.495 [2024-10-01 03:47:45.029498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:52.495 [2024-10-01 03:47:45.029542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.639 ms 00:20:52.495 [2024-10-01 03:47:45.029561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.495 [2024-10-01 03:47:45.042163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.495 [2024-10-01 03:47:45.042259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:52.495 [2024-10-01 03:47:45.042331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.562 ms 00:20:52.495 [2024-10-01 03:47:45.042349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.495 [2024-10-01 03:47:45.042486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.495 [2024-10-01 03:47:45.042814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:52.495 [2024-10-01 03:47:45.042863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:20:52.495 [2024-10-01 03:47:45.042881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.755 [2024-10-01 03:47:45.060833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.755 [2024-10-01 03:47:45.060926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:52.755 [2024-10-01 03:47:45.060968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.926 ms 00:20:52.755 [2024-10-01 03:47:45.060987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.755 [2024-10-01 03:47:45.078337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.755 [2024-10-01 03:47:45.078426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:52.755 [2024-10-01 03:47:45.078468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.305 ms 00:20:52.755 [2024-10-01 03:47:45.078485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.755 [2024-10-01 03:47:45.095618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.755 [2024-10-01 03:47:45.095708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:52.755 [2024-10-01 03:47:45.095750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.096 ms 00:20:52.755 [2024-10-01 03:47:45.095769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.755 
[2024-10-01 03:47:45.112691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.755 [2024-10-01 03:47:45.112787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:52.755 [2024-10-01 03:47:45.112863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.856 ms 00:20:52.755 [2024-10-01 03:47:45.112880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.755 [2024-10-01 03:47:45.112914] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:52.755 [2024-10-01 03:47:45.112936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.112962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.112985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:52.755 [2024-10-01 03:47:45.113577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 
00:20:52.756 [2024-10-01 03:47:45.113744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.113996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 
wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114652] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:52.756 [2024-10-01 03:47:45.114692] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:52.756 [2024-10-01 03:47:45.114701] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cad42eb8-b470-467b-8b2f-e8c0d12f6694 00:20:52.756 [2024-10-01 03:47:45.114708] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:52.756 [2024-10-01 03:47:45.114717] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:52.756 [2024-10-01 03:47:45.114722] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:52.756 [2024-10-01 03:47:45.114729] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:52.756 [2024-10-01 03:47:45.114735] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:52.756 [2024-10-01 03:47:45.114742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:52.757 [2024-10-01 03:47:45.114748] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:52.757 [2024-10-01 03:47:45.114755] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:52.757 [2024-10-01 03:47:45.114760] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:52.757 [2024-10-01 03:47:45.114766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.757 [2024-10-01 03:47:45.114772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:52.757 [2024-10-01 03:47:45.114781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.854 ms 00:20:52.757 [2024-10-01 03:47:45.114787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.124798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.757 [2024-10-01 03:47:45.124825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:52.757 [2024-10-01 03:47:45.124835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.981 ms 00:20:52.757 [2024-10-01 03:47:45.124842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.125155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:52.757 [2024-10-01 03:47:45.125164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:52.757 [2024-10-01 03:47:45.125172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:20:52.757 [2024-10-01 03:47:45.125180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.155917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.156087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:52.757 [2024-10-01 03:47:45.156104] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.156110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.156163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.156171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:52.757 [2024-10-01 03:47:45.156179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.156187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.156245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.156252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:52.757 [2024-10-01 03:47:45.156260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.156266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.156285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.156291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:52.757 [2024-10-01 03:47:45.156298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.156304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.219055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.219245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:52.757 [2024-10-01 03:47:45.219263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.219270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.270037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.270215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:52.757 [2024-10-01 03:47:45.270233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.270243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.270341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.270349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:52.757 [2024-10-01 03:47:45.270358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.270366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.270407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.270415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:52.757 [2024-10-01 03:47:45.270423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.270429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.270511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.270520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory 
pools 00:20:52.757 [2024-10-01 03:47:45.270528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.270535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.270563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.270580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:52.757 [2024-10-01 03:47:45.270588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.270594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.270632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.270639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:52.757 [2024-10-01 03:47:45.270647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.270654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.270697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:52.757 [2024-10-01 03:47:45.270704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:52.757 [2024-10-01 03:47:45.270712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:52.757 [2024-10-01 03:47:45.270718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:52.757 [2024-10-01 03:47:45.270838] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 282.457 ms, result 0 00:20:52.757 true 00:20:52.757 03:47:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 76116 00:20:52.757 03:47:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid76116 00:20:52.757 03:47:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:20:53.015 [2024-10-01 03:47:45.362073] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
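The kill -9 above, together with the JSON-driven spdk_dd whose startup banner follows, is the dirty-shutdown mechanic this test exercises: the spdk_tgt that created ftl0 is killed outright, and a second 1 GiB payload is then written to the upper half of the device by a standalone spdk_dd that rebuilds the bdev stack from the config saved earlier via save_subsystem_config. A hedged sketch of the step, with the pid and paths standing in for the values from this run:

#!/usr/bin/env bash
# Sketch of the dirty-shutdown step traced above; pid and paths are
# placeholders mirroring this particular run.
set -euo pipefail
SPDK_DIR=/home/vagrant/spdk_repo/spdk
FTL_JSON=$SPDK_DIR/test/ftl/config/ftl.json  # saved earlier by wrapping the
                                             # output of "rpc.py save_subsystem_config -n bdev"
spdk_tgt_pid=76116                           # placeholder: pid of the first spdk_tgt

# Kill the target outright and drop the stale trace file it leaves behind.
kill -9 "$spdk_tgt_pid"
rm -f "/dev/shm/spdk_tgt_trace.pid${spdk_tgt_pid}"

# Generate a second 1 GiB payload ...
"$SPDK_DIR/build/bin/spdk_dd" --if=/dev/urandom \
    --of="$SPDK_DIR/test/ftl/testfile2" --bs=4096 --count=262144

# ... and write it to the second half of ftl0 (--seek=262144 blocks) from a
# fresh spdk_dd that loads the bdev stack from the saved JSON config.
"$SPDK_DIR/build/bin/spdk_dd" --if="$SPDK_DIR/test/ftl/testfile2" \
    --ob=ftl0 --count=262144 --seek=262144 --json="$FTL_JSON"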
00:20:53.016 [2024-10-01 03:47:45.362200] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76673 ] 00:20:53.016 [2024-10-01 03:47:45.512709] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:53.274 [2024-10-01 03:47:45.691202] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.393  Copying: 256/1024 [MB] (256 MBps) Copying: 514/1024 [MB] (258 MBps) Copying: 769/1024 [MB] (254 MBps) Copying: 1022/1024 [MB] (253 MBps) Copying: 1024/1024 [MB] (average 255 MBps) 00:20:58.393 00:20:58.393 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 76116 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:20:58.393 03:47:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:58.393 [2024-10-01 03:47:50.660686] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:20:58.393 [2024-10-01 03:47:50.660813] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76732 ] 00:20:58.393 [2024-10-01 03:47:50.809738] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:58.651 [2024-10-01 03:47:50.980937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.910 [2024-10-01 03:47:51.211211] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.910 [2024-10-01 03:47:51.211275] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.910 [2024-10-01 03:47:51.274374] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:20:58.910 [2024-10-01 03:47:51.275060] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:20:58.910 [2024-10-01 03:47:51.275323] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:20:58.910 [2024-10-01 03:47:51.458670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.910 [2024-10-01 03:47:51.458720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.910 [2024-10-01 03:47:51.458732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:58.910 [2024-10-01 03:47:51.458738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.910 [2024-10-01 03:47:51.458778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.910 [2024-10-01 03:47:51.458786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.910 [2024-10-01 03:47:51.458793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:58.910 [2024-10-01 03:47:51.458801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.910 [2024-10-01 03:47:51.458815] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.910 [2024-10-01 03:47:51.459370] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 
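Note how much work the standalone spdk_dd does before it can touch ftl0: it initially cannot find nvc0n1, runs blobstore recovery on the cache device, and only then walks the FTL startup steps (Check configuration, Open base bdev, attaching nvc0n1p0 as the write-buffer cache). Re-attaching an existing FTL instance by hand follows the same shape. A hedged sketch via rpc.py; the flag names follow recent SPDK conventions and "base0" is a placeholder for the base bdev, so verify against "scripts/rpc.py bdev_ftl_create -h" in your tree:

#!/usr/bin/env bash
# Hedged sketch: re-attach an existing FTL bdev after a dirty shutdown.
# nvc0n1p0 and the UUID come from this log; "base0" is a placeholder name
# for the ~101 GiB base bdev. Verify flags with "rpc.py bdev_ftl_create -h".
SPDK_DIR=/home/vagrant/spdk_repo/spdk

# Passing the UUID printed when ftl0 was first created asks FTL to load the
# existing on-disk instance (running recovery if needed) rather than
# formatting a new one.
"$SPDK_DIR/scripts/rpc.py" bdev_ftl_create -b ftl0 \
    -d base0 -c nvc0n1p0 \
    -u cad42eb8-b470-467b-8b2f-e8c0d12f6694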
00:20:58.910 [2024-10-01 03:47:51.459391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.910 [2024-10-01 03:47:51.459398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.910 [2024-10-01 03:47:51.459405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.580 ms 00:20:58.910 [2024-10-01 03:47:51.459411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.460703] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:59.170 [2024-10-01 03:47:51.471037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.170 [2024-10-01 03:47:51.471065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:59.170 [2024-10-01 03:47:51.471076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.335 ms 00:20:59.170 [2024-10-01 03:47:51.471083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.471135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.170 [2024-10-01 03:47:51.471145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:59.170 [2024-10-01 03:47:51.471152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:59.170 [2024-10-01 03:47:51.471158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.477390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.170 [2024-10-01 03:47:51.477577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.170 [2024-10-01 03:47:51.477591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.192 ms 00:20:59.170 [2024-10-01 03:47:51.477597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.477662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.170 [2024-10-01 03:47:51.477670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.170 [2024-10-01 03:47:51.477678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:20:59.170 [2024-10-01 03:47:51.477684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.477733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.170 [2024-10-01 03:47:51.477742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:59.170 [2024-10-01 03:47:51.477749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:59.170 [2024-10-01 03:47:51.477756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.477774] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:59.170 [2024-10-01 03:47:51.480726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.170 [2024-10-01 03:47:51.480844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.170 [2024-10-01 03:47:51.480857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.957 ms 00:20:59.170 [2024-10-01 03:47:51.480864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.480895] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.170 
[2024-10-01 03:47:51.480903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:59.170 [2024-10-01 03:47:51.480909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:59.170 [2024-10-01 03:47:51.480916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.480933] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:59.170 [2024-10-01 03:47:51.480950] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:59.170 [2024-10-01 03:47:51.480980] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:59.170 [2024-10-01 03:47:51.480996] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:59.170 [2024-10-01 03:47:51.481093] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:59.170 [2024-10-01 03:47:51.481103] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:59.170 [2024-10-01 03:47:51.481112] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:59.170 [2024-10-01 03:47:51.481121] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:59.170 [2024-10-01 03:47:51.481128] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:59.170 [2024-10-01 03:47:51.481134] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:59.170 [2024-10-01 03:47:51.481141] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:59.170 [2024-10-01 03:47:51.481147] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:59.170 [2024-10-01 03:47:51.481155] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:59.170 [2024-10-01 03:47:51.481161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.170 [2024-10-01 03:47:51.481169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:59.170 [2024-10-01 03:47:51.481175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.231 ms 00:20:59.170 [2024-10-01 03:47:51.481181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.481245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.170 [2024-10-01 03:47:51.481253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:59.170 [2024-10-01 03:47:51.481260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:59.170 [2024-10-01 03:47:51.481266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.170 [2024-10-01 03:47:51.481344] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:59.170 [2024-10-01 03:47:51.481352] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:59.170 [2024-10-01 03:47:51.481361] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:59.170 [2024-10-01 03:47:51.481367] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.170 [2024-10-01 03:47:51.481373] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:59.170 [2024-10-01 03:47:51.481379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:59.170 [2024-10-01 03:47:51.481384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:59.170 [2024-10-01 03:47:51.481390] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:59.170 [2024-10-01 03:47:51.481396] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:59.170 [2024-10-01 03:47:51.481408] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:59.170 [2024-10-01 03:47:51.481414] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:59.170 [2024-10-01 03:47:51.481420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:59.170 [2024-10-01 03:47:51.481426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:59.170 [2024-10-01 03:47:51.481432] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:59.171 [2024-10-01 03:47:51.481438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:59.171 [2024-10-01 03:47:51.481443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:59.171 [2024-10-01 03:47:51.481454] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:59.171 [2024-10-01 03:47:51.481460] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:59.171 [2024-10-01 03:47:51.481471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.171 [2024-10-01 03:47:51.481482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:59.171 [2024-10-01 03:47:51.481487] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.171 [2024-10-01 03:47:51.481498] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:59.171 [2024-10-01 03:47:51.481503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.171 [2024-10-01 03:47:51.481514] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:59.171 [2024-10-01 03:47:51.481519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:59.171 [2024-10-01 03:47:51.481529] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:59.171 [2024-10-01 03:47:51.481535] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:59.171 [2024-10-01 03:47:51.481546] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:59.171 [2024-10-01 03:47:51.481551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:59.171 [2024-10-01 
03:47:51.481555] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:59.171 [2024-10-01 03:47:51.481560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:59.171 [2024-10-01 03:47:51.481566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:59.171 [2024-10-01 03:47:51.481571] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481576] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:59.171 [2024-10-01 03:47:51.481581] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:59.171 [2024-10-01 03:47:51.481586] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481593] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:59.171 [2024-10-01 03:47:51.481602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:59.171 [2024-10-01 03:47:51.481609] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:59.171 [2024-10-01 03:47:51.481615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:59.171 [2024-10-01 03:47:51.481621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:59.171 [2024-10-01 03:47:51.481627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:59.171 [2024-10-01 03:47:51.481632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:59.171 [2024-10-01 03:47:51.481637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:59.171 [2024-10-01 03:47:51.481642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:59.171 [2024-10-01 03:47:51.481647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:59.171 [2024-10-01 03:47:51.481654] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:59.171 [2024-10-01 03:47:51.481662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:59.171 [2024-10-01 03:47:51.481668] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:59.171 [2024-10-01 03:47:51.481674] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:59.171 [2024-10-01 03:47:51.481679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:59.171 [2024-10-01 03:47:51.481685] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:59.171 [2024-10-01 03:47:51.481691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:59.171 [2024-10-01 03:47:51.481696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:59.171 [2024-10-01 03:47:51.481702] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:59.171 [2024-10-01 03:47:51.481707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:59.171 [2024-10-01 03:47:51.481713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:59.171 [2024-10-01 03:47:51.481718] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:59.171 [2024-10-01 03:47:51.481724] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:59.171 [2024-10-01 03:47:51.481730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:59.171 [2024-10-01 03:47:51.481736] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:59.171 [2024-10-01 03:47:51.481741] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:59.171 [2024-10-01 03:47:51.481747] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:59.171 [2024-10-01 03:47:51.481754] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:59.171 [2024-10-01 03:47:51.481762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:59.171 [2024-10-01 03:47:51.481768] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:59.171 [2024-10-01 03:47:51.481773] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:59.171 [2024-10-01 03:47:51.481780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:59.171 [2024-10-01 03:47:51.481785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.481792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:59.171 [2024-10-01 03:47:51.481798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.495 ms 00:20:59.171 [2024-10-01 03:47:51.481805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.171 [2024-10-01 03:47:51.529760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.529810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.171 [2024-10-01 03:47:51.529824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.907 ms 00:20:59.171 [2024-10-01 03:47:51.529834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.171 [2024-10-01 03:47:51.529934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.529943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:59.171 [2024-10-01 03:47:51.529952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:59.171 [2024-10-01 03:47:51.529960] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.171 [2024-10-01 03:47:51.556536] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.556568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:59.171 [2024-10-01 03:47:51.556577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.482 ms 00:20:59.171 [2024-10-01 03:47:51.556584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.171 [2024-10-01 03:47:51.556621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.556629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:59.171 [2024-10-01 03:47:51.556636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:59.171 [2024-10-01 03:47:51.556642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.171 [2024-10-01 03:47:51.557089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.557111] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:59.171 [2024-10-01 03:47:51.557119] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:20:59.171 [2024-10-01 03:47:51.557125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.171 [2024-10-01 03:47:51.557236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.557243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:59.171 [2024-10-01 03:47:51.557250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:20:59.171 [2024-10-01 03:47:51.557258] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.171 [2024-10-01 03:47:51.568550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.568576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:59.171 [2024-10-01 03:47:51.568585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.275 ms 00:20:59.171 [2024-10-01 03:47:51.568592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.171 [2024-10-01 03:47:51.579089] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:59.171 [2024-10-01 03:47:51.579245] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:59.171 [2024-10-01 03:47:51.579259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.579266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:59.171 [2024-10-01 03:47:51.579274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.583 ms 00:20:59.171 [2024-10-01 03:47:51.579279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.171 [2024-10-01 03:47:51.598098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.171 [2024-10-01 03:47:51.598215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:59.172 [2024-10-01 03:47:51.598234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.789 ms 00:20:59.172 [2024-10-01 03:47:51.598240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.607671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 
[2024-10-01 03:47:51.607700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:59.172 [2024-10-01 03:47:51.607708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.405 ms 00:20:59.172 [2024-10-01 03:47:51.607715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.616443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.616470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:59.172 [2024-10-01 03:47:51.616478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.700 ms 00:20:59.172 [2024-10-01 03:47:51.616484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.616970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.616991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:59.172 [2024-10-01 03:47:51.616999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:20:59.172 [2024-10-01 03:47:51.617022] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.665132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.665305] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:59.172 [2024-10-01 03:47:51.665322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.093 ms 00:20:59.172 [2024-10-01 03:47:51.665329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.673920] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:59.172 [2024-10-01 03:47:51.676435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.676568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:59.172 [2024-10-01 03:47:51.676584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.866 ms 00:20:59.172 [2024-10-01 03:47:51.676592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.676687] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.676698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:59.172 [2024-10-01 03:47:51.676706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:20:59.172 [2024-10-01 03:47:51.676713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.676780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.676790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:59.172 [2024-10-01 03:47:51.676798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:59.172 [2024-10-01 03:47:51.676805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.676822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.676829] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:59.172 [2024-10-01 03:47:51.676836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:59.172 
[2024-10-01 03:47:51.676842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.676875] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:59.172 [2024-10-01 03:47:51.676884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.676892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:59.172 [2024-10-01 03:47:51.676899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:59.172 [2024-10-01 03:47:51.676906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.695459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.695584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:59.172 [2024-10-01 03:47:51.695599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.539 ms 00:20:59.172 [2024-10-01 03:47:51.695606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.695670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:59.172 [2024-10-01 03:47:51.695679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:59.172 [2024-10-01 03:47:51.695687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:20:59.172 [2024-10-01 03:47:51.695694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.172 [2024-10-01 03:47:51.696702] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 237.647 ms, result 0 00:21:22.098  Copying: 48/1024 [MB] (48 MBps) Copying: 93/1024 [MB] (45 MBps) Copying: 136/1024 [MB] (43 MBps) Copying: 181/1024 [MB] (44 MBps) Copying: 225/1024 [MB] (44 MBps) Copying: 270/1024 [MB] (45 MBps) Copying: 320/1024 [MB] (49 MBps) Copying: 366/1024 [MB] (45 MBps) Copying: 418/1024 [MB] (51 MBps) Copying: 463/1024 [MB] (45 MBps) Copying: 512/1024 [MB] (49 MBps) Copying: 562/1024 [MB] (50 MBps) Copying: 607/1024 [MB] (45 MBps) Copying: 653/1024 [MB] (45 MBps) Copying: 701/1024 [MB] (48 MBps) Copying: 752/1024 [MB] (50 MBps) Copying: 806/1024 [MB] (54 MBps) Copying: 854/1024 [MB] (48 MBps) Copying: 903/1024 [MB] (49 MBps) Copying: 948/1024 [MB] (44 MBps) Copying: 993/1024 [MB] (45 MBps) Copying: 1023/1024 [MB] (29 MBps) Copying: 1024/1024 [MB] (average 44 MBps)[2024-10-01 03:48:14.479858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-10-01 03:48:14.479926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:22.098 [2024-10-01 03:48:14.479942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:22.098 [2024-10-01 03:48:14.479951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-10-01 03:48:14.482815] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:22.098 [2024-10-01 03:48:14.486602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-10-01 03:48:14.486638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:22.098 [2024-10-01 03:48:14.486651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.723 ms 00:21:22.098 [2024-10-01 03:48:14.486661] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-10-01 03:48:14.498514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-10-01 03:48:14.498551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:22.098 [2024-10-01 03:48:14.498561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.841 ms 00:21:22.098 [2024-10-01 03:48:14.498570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-10-01 03:48:14.516994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-10-01 03:48:14.517042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:22.098 [2024-10-01 03:48:14.517053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.409 ms 00:21:22.098 [2024-10-01 03:48:14.517063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-10-01 03:48:14.523159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-10-01 03:48:14.523187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:22.098 [2024-10-01 03:48:14.523196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.068 ms 00:21:22.098 [2024-10-01 03:48:14.523204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-10-01 03:48:14.547700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-10-01 03:48:14.547735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:22.098 [2024-10-01 03:48:14.547746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.458 ms 00:21:22.098 [2024-10-01 03:48:14.547754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-10-01 03:48:14.562258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-10-01 03:48:14.562292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:22.098 [2024-10-01 03:48:14.562308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.471 ms 00:21:22.098 [2024-10-01 03:48:14.562316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-10-01 03:48:14.618000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-10-01 03:48:14.618059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:22.098 [2024-10-01 03:48:14.618071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 55.645 ms 00:21:22.098 [2024-10-01 03:48:14.618079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.098 [2024-10-01 03:48:14.642013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.098 [2024-10-01 03:48:14.642051] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:22.098 [2024-10-01 03:48:14.642062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.917 ms 00:21:22.098 [2024-10-01 03:48:14.642070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.359 [2024-10-01 03:48:14.664638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.359 [2024-10-01 03:48:14.664681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:22.359 [2024-10-01 03:48:14.664691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.535 ms 00:21:22.359 
[2024-10-01 03:48:14.664698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.359 [2024-10-01 03:48:14.686550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.359 [2024-10-01 03:48:14.686584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:22.359 [2024-10-01 03:48:14.686600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.821 ms 00:21:22.359 [2024-10-01 03:48:14.686607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.359 [2024-10-01 03:48:14.708541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.359 [2024-10-01 03:48:14.708576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:22.359 [2024-10-01 03:48:14.708586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.876 ms 00:21:22.359 [2024-10-01 03:48:14.708594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.359 [2024-10-01 03:48:14.708625] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:22.359 [2024-10-01 03:48:14.708641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128768 / 261120 wr_cnt: 1 state: open 00:21:22.359 [2024-10-01 03:48:14.708651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 
[2024-10-01 03:48:14.708782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:22.359 [2024-10-01 03:48:14.708838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 
state: free 00:21:22.360 [2024-10-01 03:48:14.708974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.708996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 
0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:22.360 [2024-10-01 03:48:14.709453] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:22.360 [2024-10-01 03:48:14.709462] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cad42eb8-b470-467b-8b2f-e8c0d12f6694 00:21:22.360 [2024-10-01 03:48:14.709470] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128768 00:21:22.360 [2024-10-01 03:48:14.709477] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 129728 00:21:22.360 [2024-10-01 03:48:14.709485] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128768 00:21:22.360 [2024-10-01 03:48:14.709493] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0075 00:21:22.360 [2024-10-01 03:48:14.709500] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:22.360 [2024-10-01 03:48:14.709508] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:22.360 [2024-10-01 03:48:14.709524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:22.360 [2024-10-01 03:48:14.709531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:22.360 [2024-10-01 03:48:14.709538] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:22.360 [2024-10-01 03:48:14.709546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.360 [2024-10-01 03:48:14.709556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:22.360 [2024-10-01 03:48:14.709565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.922 ms 00:21:22.360 [2024-10-01 03:48:14.709573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.360 [2024-10-01 03:48:14.722149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.360 [2024-10-01 03:48:14.722180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:22.360 [2024-10-01 03:48:14.722191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.560 ms 00:21:22.361 [2024-10-01 03:48:14.722200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.722565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:22.361 [2024-10-01 03:48:14.722591] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:22.361 [2024-10-01 03:48:14.722601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:21:22.361 [2024-10-01 03:48:14.722609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.752328] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.752368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:22.361 [2024-10-01 03:48:14.752379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.752391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.752455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.752464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:22.361 [2024-10-01 03:48:14.752472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.752480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.752541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.752552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:22.361 [2024-10-01 03:48:14.752561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.752574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.752592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.752600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:22.361 [2024-10-01 03:48:14.752608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.752616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.817850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.817902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:22.361 [2024-10-01 03:48:14.817913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.817920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.868450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.868507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:22.361 [2024-10-01 03:48:14.868517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.868524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.868601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.868610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:22.361 [2024-10-01 03:48:14.868617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.868624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.868655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Rollback 00:21:22.361 [2024-10-01 03:48:14.868668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:22.361 [2024-10-01 03:48:14.868674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.868681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.868763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.868772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:22.361 [2024-10-01 03:48:14.868779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.868786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.868811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.868820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:22.361 [2024-10-01 03:48:14.868829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.868836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.868870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.868877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:22.361 [2024-10-01 03:48:14.868884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.868891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.868931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:22.361 [2024-10-01 03:48:14.868942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:22.361 [2024-10-01 03:48:14.868949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:22.361 [2024-10-01 03:48:14.868955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:22.361 [2024-10-01 03:48:14.869073] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 392.285 ms, result 0 00:21:24.264 00:21:24.264 00:21:24.264 03:48:16 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:21:26.167 03:48:18 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:26.167 [2024-10-01 03:48:18.707702] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:21:26.167 [2024-10-01 03:48:18.707822] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77018 ] 00:21:26.425 [2024-10-01 03:48:18.850192] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:26.684 [2024-10-01 03:48:19.030272] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.944 [2024-10-01 03:48:19.261351] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.944 [2024-10-01 03:48:19.261412] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:26.944 [2024-10-01 03:48:19.415558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.415616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:26.944 [2024-10-01 03:48:19.415629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:26.944 [2024-10-01 03:48:19.415639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.415681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.415689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:26.944 [2024-10-01 03:48:19.415696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:26.944 [2024-10-01 03:48:19.415702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.415717] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:26.944 [2024-10-01 03:48:19.416267] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:26.944 [2024-10-01 03:48:19.416286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.416292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:26.944 [2024-10-01 03:48:19.416299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.573 ms 00:21:26.944 [2024-10-01 03:48:19.416306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.417596] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:26.944 [2024-10-01 03:48:19.427717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.427748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:26.944 [2024-10-01 03:48:19.427759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.123 ms 00:21:26.944 [2024-10-01 03:48:19.427767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.427815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.427827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:26.944 [2024-10-01 03:48:19.427835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:21:26.944 [2024-10-01 03:48:19.427841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.434148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:26.944 [2024-10-01 03:48:19.434177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:26.944 [2024-10-01 03:48:19.434186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.257 ms 00:21:26.944 [2024-10-01 03:48:19.434192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.434255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.434263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:26.944 [2024-10-01 03:48:19.434270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:26.944 [2024-10-01 03:48:19.434276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.434320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.434328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:26.944 [2024-10-01 03:48:19.434335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:26.944 [2024-10-01 03:48:19.434341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.434361] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:26.944 [2024-10-01 03:48:19.437412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.437439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:26.944 [2024-10-01 03:48:19.437447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.058 ms 00:21:26.944 [2024-10-01 03:48:19.437453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.437479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.437487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:26.944 [2024-10-01 03:48:19.437494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:26.944 [2024-10-01 03:48:19.437500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.944 [2024-10-01 03:48:19.437520] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:26.944 [2024-10-01 03:48:19.437538] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:26.944 [2024-10-01 03:48:19.437569] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:26.944 [2024-10-01 03:48:19.437582] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:26.944 [2024-10-01 03:48:19.437665] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:26.944 [2024-10-01 03:48:19.437676] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:26.944 [2024-10-01 03:48:19.437685] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:26.944 [2024-10-01 03:48:19.437696] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:26.944 [2024-10-01 03:48:19.437703] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:26.944 [2024-10-01 03:48:19.437711] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:26.944 [2024-10-01 03:48:19.437717] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:26.944 [2024-10-01 03:48:19.437724] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:26.944 [2024-10-01 03:48:19.437730] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:26.944 [2024-10-01 03:48:19.437736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.944 [2024-10-01 03:48:19.437743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:26.944 [2024-10-01 03:48:19.437749] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.219 ms 00:21:26.944 [2024-10-01 03:48:19.437755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.945 [2024-10-01 03:48:19.437820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.945 [2024-10-01 03:48:19.437830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:26.945 [2024-10-01 03:48:19.437836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:26.945 [2024-10-01 03:48:19.437843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.945 [2024-10-01 03:48:19.437926] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:26.945 [2024-10-01 03:48:19.437942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:26.945 [2024-10-01 03:48:19.437950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:26.945 [2024-10-01 03:48:19.437956] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.945 [2024-10-01 03:48:19.437963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:26.945 [2024-10-01 03:48:19.437969] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:26.945 [2024-10-01 03:48:19.437974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:26.945 [2024-10-01 03:48:19.437981] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:26.945 [2024-10-01 03:48:19.437986] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:26.945 [2024-10-01 03:48:19.437993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.945 [2024-10-01 03:48:19.437999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:26.945 [2024-10-01 03:48:19.438016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:26.945 [2024-10-01 03:48:19.438022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:26.945 [2024-10-01 03:48:19.438033] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:26.945 [2024-10-01 03:48:19.438039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:26.945 [2024-10-01 03:48:19.438046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438052] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:26.945 [2024-10-01 03:48:19.438057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:26.945 [2024-10-01 03:48:19.438063] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:26.945 [2024-10-01 03:48:19.438074] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.945 [2024-10-01 03:48:19.438085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:26.945 [2024-10-01 03:48:19.438091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438096] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.945 [2024-10-01 03:48:19.438102] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:26.945 [2024-10-01 03:48:19.438107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438113] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.945 [2024-10-01 03:48:19.438118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:26.945 [2024-10-01 03:48:19.438124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:26.945 [2024-10-01 03:48:19.438135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:26.945 [2024-10-01 03:48:19.438141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.945 [2024-10-01 03:48:19.438151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:26.945 [2024-10-01 03:48:19.438157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:26.945 [2024-10-01 03:48:19.438162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:26.945 [2024-10-01 03:48:19.438168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:26.945 [2024-10-01 03:48:19.438173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:26.945 [2024-10-01 03:48:19.438179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438185] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:26.945 [2024-10-01 03:48:19.438190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:26.945 [2024-10-01 03:48:19.438195] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438200] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:26.945 [2024-10-01 03:48:19.438206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:26.945 [2024-10-01 03:48:19.438214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:26.945 [2024-10-01 03:48:19.438220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:26.945 [2024-10-01 03:48:19.438227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:26.945 [2024-10-01 03:48:19.438234] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:26.945 [2024-10-01 03:48:19.438240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:26.945 
[2024-10-01 03:48:19.438245] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:26.945 [2024-10-01 03:48:19.438251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:26.945 [2024-10-01 03:48:19.438256] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:26.945 [2024-10-01 03:48:19.438263] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:26.945 [2024-10-01 03:48:19.438271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.945 [2024-10-01 03:48:19.438278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:26.945 [2024-10-01 03:48:19.438285] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:26.945 [2024-10-01 03:48:19.438291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:26.945 [2024-10-01 03:48:19.438297] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:26.945 [2024-10-01 03:48:19.438302] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:26.945 [2024-10-01 03:48:19.438307] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:26.945 [2024-10-01 03:48:19.438313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:26.945 [2024-10-01 03:48:19.438319] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:26.945 [2024-10-01 03:48:19.438324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:26.945 [2024-10-01 03:48:19.438330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:26.945 [2024-10-01 03:48:19.438335] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:26.945 [2024-10-01 03:48:19.438340] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:26.945 [2024-10-01 03:48:19.438346] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:26.945 [2024-10-01 03:48:19.438351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:26.945 [2024-10-01 03:48:19.438357] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:26.945 [2024-10-01 03:48:19.438364] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:26.945 [2024-10-01 03:48:19.438371] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:21:26.945 [2024-10-01 03:48:19.438376] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:26.945 [2024-10-01 03:48:19.438382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:26.945 [2024-10-01 03:48:19.438387] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:26.945 [2024-10-01 03:48:19.438393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.945 [2024-10-01 03:48:19.438399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:26.945 [2024-10-01 03:48:19.438405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.519 ms 00:21:26.945 [2024-10-01 03:48:19.438410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.946 [2024-10-01 03:48:19.474916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.946 [2024-10-01 03:48:19.474969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:26.946 [2024-10-01 03:48:19.474985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.453 ms 00:21:26.946 [2024-10-01 03:48:19.474996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:26.946 [2024-10-01 03:48:19.475141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:26.946 [2024-10-01 03:48:19.475154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:26.946 [2024-10-01 03:48:19.475166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:21:26.946 [2024-10-01 03:48:19.475175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.204 [2024-10-01 03:48:19.501782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.204 [2024-10-01 03:48:19.501815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:27.204 [2024-10-01 03:48:19.501827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.532 ms 00:21:27.204 [2024-10-01 03:48:19.501834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.204 [2024-10-01 03:48:19.501866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.204 [2024-10-01 03:48:19.501873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:27.205 [2024-10-01 03:48:19.501880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:21:27.205 [2024-10-01 03:48:19.501886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.502312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.502332] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:27.205 [2024-10-01 03:48:19.502341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:21:27.205 [2024-10-01 03:48:19.502351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.502460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.502474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:27.205 [2024-10-01 03:48:19.502481] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:21:27.205 [2024-10-01 03:48:19.502488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.513510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.513536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:27.205 [2024-10-01 03:48:19.513545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.005 ms 00:21:27.205 [2024-10-01 03:48:19.513551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.523937] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:21:27.205 [2024-10-01 03:48:19.523968] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:27.205 [2024-10-01 03:48:19.523978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.523985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:27.205 [2024-10-01 03:48:19.523993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.325 ms 00:21:27.205 [2024-10-01 03:48:19.523999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.542848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.542879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:27.205 [2024-10-01 03:48:19.542889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.802 ms 00:21:27.205 [2024-10-01 03:48:19.542897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.552043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.552072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:27.205 [2024-10-01 03:48:19.552080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.107 ms 00:21:27.205 [2024-10-01 03:48:19.552086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.560557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.560586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:27.205 [2024-10-01 03:48:19.560595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.442 ms 00:21:27.205 [2024-10-01 03:48:19.560601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.561108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.561131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:27.205 [2024-10-01 03:48:19.561139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.447 ms 00:21:27.205 [2024-10-01 03:48:19.561145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.608897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.608953] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:27.205 [2024-10-01 03:48:19.608965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 47.736 ms 00:21:27.205 [2024-10-01 03:48:19.608972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.617092] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:27.205 [2024-10-01 03:48:19.619689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.619716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:27.205 [2024-10-01 03:48:19.619727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.659 ms 00:21:27.205 [2024-10-01 03:48:19.619737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.619819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.619830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:27.205 [2024-10-01 03:48:19.619838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:27.205 [2024-10-01 03:48:19.619845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.621363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.621393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:27.205 [2024-10-01 03:48:19.621402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.468 ms 00:21:27.205 [2024-10-01 03:48:19.621409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.621438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.621447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:27.205 [2024-10-01 03:48:19.621455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:27.205 [2024-10-01 03:48:19.621462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.621497] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:27.205 [2024-10-01 03:48:19.621506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.621513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:27.205 [2024-10-01 03:48:19.621525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:27.205 [2024-10-01 03:48:19.621532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.640009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.640041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:27.205 [2024-10-01 03:48:19.640050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.462 ms 00:21:27.205 [2024-10-01 03:48:19.640057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:27.205 [2024-10-01 03:48:19.640119] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:27.205 [2024-10-01 03:48:19.640128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:27.205 [2024-10-01 03:48:19.640135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:21:27.205 [2024-10-01 03:48:19.640141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
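Each management step above emits a matching "name:" / "duration:" record pair, and the finish_msg record just below reports the total for the whole 'FTL startup' process. A minimal sketch for tallying where that time goes, assuming this console output has been saved one record per line to a file named build.log (the filename and the saved-log layout are assumptions about post-processing, not something the test harness itself produces):

awk '
  /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] name:/ { sub(/.*name: /, ""); step = $0 }
  /trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] duration:/ {
      # the duration record looks like "... duration: 47.736 ms"
      for (i = 1; i <= NF; i++) if ($i == "duration:") ms = $(i + 1)
      total[step] += ms
  }
  END { for (s in total) printf "%10.3f ms  %s\n", total[s], s }
' build.log | sort -rn

Run against this startup sequence it would put "Restore P2L checkpoints" (47.736 ms) and "Initialize metadata" (36.453 ms) at the top of the 225.341 ms total reported below.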
00:21:27.205 [2024-10-01 03:48:19.641310] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 225.341 ms, result 0 00:21:47.259  Copying: 1104/1048576 [kB] (1104 kBps) Copying: 15/1024 [MB] (14 MBps) Copying: 68/1024 [MB] (52 MBps) Copying: 125/1024 [MB] (57 MBps) Copying: 185/1024 [MB] (60 MBps) Copying: 238/1024 [MB] (52 MBps) Copying: 295/1024 [MB] (57 MBps) Copying: 351/1024 [MB] (55 MBps) Copying: 403/1024 [MB] (52 MBps) Copying: 456/1024 [MB] (53 MBps) Copying: 515/1024 [MB] (58 MBps) Copying: 572/1024 [MB] (57 MBps) Copying: 637/1024 [MB] (65 MBps) Copying: 696/1024 [MB] (59 MBps) Copying: 758/1024 [MB] (62 MBps) Copying: 812/1024 [MB] (53 MBps) Copying: 862/1024 [MB] (50 MBps) Copying: 915/1024 [MB] (52 MBps) Copying: 971/1024 [MB] (55 MBps) Copying: 1024/1024 [MB] (average 51 MBps)[2024-10-01 03:48:39.650396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.650465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:47.259 [2024-10-01 03:48:39.650479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:47.259 [2024-10-01 03:48:39.650486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.650511] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:47.259 [2024-10-01 03:48:39.652755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.652790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:47.259 [2024-10-01 03:48:39.652798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.230 ms 00:21:47.259 [2024-10-01 03:48:39.652805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.652984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.652993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:47.259 [2024-10-01 03:48:39.653011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.161 ms 00:21:47.259 [2024-10-01 03:48:39.653018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.660879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.660912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:47.259 [2024-10-01 03:48:39.660921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.847 ms 00:21:47.259 [2024-10-01 03:48:39.660932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.665606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.665630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:47.259 [2024-10-01 03:48:39.665638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.653 ms 00:21:47.259 [2024-10-01 03:48:39.665645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.684939] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.684967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:47.259 [2024-10-01 03:48:39.684977] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 19.259 ms 00:21:47.259 [2024-10-01 03:48:39.684983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.696310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.696339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:47.259 [2024-10-01 03:48:39.696350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.291 ms 00:21:47.259 [2024-10-01 03:48:39.696358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.698190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.698217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:47.259 [2024-10-01 03:48:39.698225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.808 ms 00:21:47.259 [2024-10-01 03:48:39.698232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.716204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.716231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:47.259 [2024-10-01 03:48:39.716240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.960 ms 00:21:47.259 [2024-10-01 03:48:39.716246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.733812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.733839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:47.259 [2024-10-01 03:48:39.733848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.538 ms 00:21:47.259 [2024-10-01 03:48:39.733854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.751412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.751440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:47.259 [2024-10-01 03:48:39.751449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.530 ms 00:21:47.259 [2024-10-01 03:48:39.751456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.768806] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.259 [2024-10-01 03:48:39.768833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:47.259 [2024-10-01 03:48:39.768842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.284 ms 00:21:47.259 [2024-10-01 03:48:39.768848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.259 [2024-10-01 03:48:39.768874] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:47.259 [2024-10-01 03:48:39.768887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:21:47.259 [2024-10-01 03:48:39.768899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:21:47.259 [2024-10-01 03:48:39.768906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 
state: free 00:21:47.259 [2024-10-01 03:48:39.768918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.768994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.769012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.769019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.769025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.769031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.769037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.769043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:47.259 [2024-10-01 03:48:39.769051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 
261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769172] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769196] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769377] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:47.260 [2024-10-01 03:48:39.769514] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:47.260 [2024-10-01 03:48:39.769521] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cad42eb8-b470-467b-8b2f-e8c0d12f6694 00:21:47.260 [2024-10-01 03:48:39.769527] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:21:47.260 [2024-10-01 03:48:39.769532] ftl_debug.c: 214:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] total writes: 135872 00:21:47.260 [2024-10-01 03:48:39.769538] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 133888 00:21:47.260 [2024-10-01 03:48:39.769544] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0148 00:21:47.260 [2024-10-01 03:48:39.769550] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:47.260 [2024-10-01 03:48:39.769557] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:47.260 [2024-10-01 03:48:39.769563] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:47.260 [2024-10-01 03:48:39.769569] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:47.260 [2024-10-01 03:48:39.769574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:47.260 [2024-10-01 03:48:39.769580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.260 [2024-10-01 03:48:39.769586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:47.260 [2024-10-01 03:48:39.769598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.707 ms 00:21:47.260 [2024-10-01 03:48:39.769606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.260 [2024-10-01 03:48:39.779786] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.260 [2024-10-01 03:48:39.779812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:47.260 [2024-10-01 03:48:39.779822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.166 ms 00:21:47.261 [2024-10-01 03:48:39.779828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.261 [2024-10-01 03:48:39.780132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:47.261 [2024-10-01 03:48:39.780146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:47.261 [2024-10-01 03:48:39.780153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.289 ms 00:21:47.261 [2024-10-01 03:48:39.780159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.261 [2024-10-01 03:48:39.803367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.261 [2024-10-01 03:48:39.803399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:47.261 [2024-10-01 03:48:39.803408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.261 [2024-10-01 03:48:39.803415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.261 [2024-10-01 03:48:39.803471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.261 [2024-10-01 03:48:39.803481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:47.261 [2024-10-01 03:48:39.803487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.261 [2024-10-01 03:48:39.803494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.261 [2024-10-01 03:48:39.803552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.261 [2024-10-01 03:48:39.803560] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:47.261 [2024-10-01 03:48:39.803569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.261 [2024-10-01 03:48:39.803575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.261 
[2024-10-01 03:48:39.803588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.261 [2024-10-01 03:48:39.803596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:47.261 [2024-10-01 03:48:39.803606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.261 [2024-10-01 03:48:39.803612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.519 [2024-10-01 03:48:39.866973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.519 [2024-10-01 03:48:39.867029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:47.519 [2024-10-01 03:48:39.867039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.519 [2024-10-01 03:48:39.867046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.519 [2024-10-01 03:48:39.918134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.519 [2024-10-01 03:48:39.918193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:47.519 [2024-10-01 03:48:39.918204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.519 [2024-10-01 03:48:39.918211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.519 [2024-10-01 03:48:39.918284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.519 [2024-10-01 03:48:39.918293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:47.519 [2024-10-01 03:48:39.918300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.519 [2024-10-01 03:48:39.918307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.519 [2024-10-01 03:48:39.918337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.519 [2024-10-01 03:48:39.918345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:47.519 [2024-10-01 03:48:39.918351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.519 [2024-10-01 03:48:39.918360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.519 [2024-10-01 03:48:39.918437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.519 [2024-10-01 03:48:39.918446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:47.519 [2024-10-01 03:48:39.918452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.519 [2024-10-01 03:48:39.918459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.519 [2024-10-01 03:48:39.918483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.520 [2024-10-01 03:48:39.918490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:47.520 [2024-10-01 03:48:39.918497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.520 [2024-10-01 03:48:39.918503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-10-01 03:48:39.918538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.520 [2024-10-01 03:48:39.918547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:47.520 [2024-10-01 03:48:39.918553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.520 [2024-10-01 03:48:39.918560] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-10-01 03:48:39.918611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:47.520 [2024-10-01 03:48:39.918622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:47.520 [2024-10-01 03:48:39.918628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:47.520 [2024-10-01 03:48:39.918638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:47.520 [2024-10-01 03:48:39.918743] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 268.320 ms, result 0 00:21:48.086 00:21:48.086 00:21:48.344 03:48:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:50.248 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:50.248 03:48:42 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:21:50.506 [2024-10-01 03:48:42.799484] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:21:50.506 [2024-10-01 03:48:42.799598] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77272 ] 00:21:50.506 [2024-10-01 03:48:42.942060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:50.765 [2024-10-01 03:48:43.124937] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.025 [2024-10-01 03:48:43.357162] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:51.025 [2024-10-01 03:48:43.357226] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:51.025 [2024-10-01 03:48:43.510370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.510418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:51.025 [2024-10-01 03:48:43.510432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:51.025 [2024-10-01 03:48:43.510442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.510483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.510491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:51.025 [2024-10-01 03:48:43.510498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:21:51.025 [2024-10-01 03:48:43.510505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.510519] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:51.025 [2024-10-01 03:48:43.511068] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:51.025 [2024-10-01 03:48:43.511083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.511089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:51.025 [2024-10-01 03:48:43.511096] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:21:51.025 [2024-10-01 03:48:43.511103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.512401] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:51.025 [2024-10-01 03:48:43.522681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.522713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:51.025 [2024-10-01 03:48:43.522725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.280 ms 00:21:51.025 [2024-10-01 03:48:43.522733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.522789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.522797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:51.025 [2024-10-01 03:48:43.522805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:51.025 [2024-10-01 03:48:43.522811] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.529196] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.529226] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:51.025 [2024-10-01 03:48:43.529235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.334 ms 00:21:51.025 [2024-10-01 03:48:43.529242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.529308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.529315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:51.025 [2024-10-01 03:48:43.529322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:21:51.025 [2024-10-01 03:48:43.529329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.529375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.529388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:51.025 [2024-10-01 03:48:43.529395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:51.025 [2024-10-01 03:48:43.529401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.529421] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:51.025 [2024-10-01 03:48:43.532440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.532466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:51.025 [2024-10-01 03:48:43.532474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.025 ms 00:21:51.025 [2024-10-01 03:48:43.532481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.532506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.532514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:51.025 [2024-10-01 03:48:43.532521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:21:51.025 [2024-10-01 03:48:43.532527] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.532547] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:51.025 [2024-10-01 03:48:43.532565] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:51.025 [2024-10-01 03:48:43.532594] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:51.025 [2024-10-01 03:48:43.532608] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:51.025 [2024-10-01 03:48:43.532692] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:51.025 [2024-10-01 03:48:43.532705] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:51.025 [2024-10-01 03:48:43.532714] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:51.025 [2024-10-01 03:48:43.532726] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:51.025 [2024-10-01 03:48:43.532734] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:51.025 [2024-10-01 03:48:43.532741] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:51.025 [2024-10-01 03:48:43.532747] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:51.025 [2024-10-01 03:48:43.532753] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:51.025 [2024-10-01 03:48:43.532760] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:51.025 [2024-10-01 03:48:43.532766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.532774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:51.025 [2024-10-01 03:48:43.532781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.222 ms 00:21:51.025 [2024-10-01 03:48:43.532787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.532852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.025 [2024-10-01 03:48:43.532865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:51.025 [2024-10-01 03:48:43.532872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:21:51.025 [2024-10-01 03:48:43.532878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.025 [2024-10-01 03:48:43.532965] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:51.025 [2024-10-01 03:48:43.532978] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:51.025 [2024-10-01 03:48:43.532985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:51.025 [2024-10-01 03:48:43.532993] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:51.025 [2024-10-01 03:48:43.533010] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:51.025 [2024-10-01 03:48:43.533016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:51.025 [2024-10-01 03:48:43.533021] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 
MiB 00:21:51.025 [2024-10-01 03:48:43.533028] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:51.025 [2024-10-01 03:48:43.533034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:51.025 [2024-10-01 03:48:43.533040] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:51.025 [2024-10-01 03:48:43.533046] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:51.025 [2024-10-01 03:48:43.533052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:51.025 [2024-10-01 03:48:43.533058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:51.025 [2024-10-01 03:48:43.533069] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:51.025 [2024-10-01 03:48:43.533075] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:51.025 [2024-10-01 03:48:43.533081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:51.025 [2024-10-01 03:48:43.533087] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:51.026 [2024-10-01 03:48:43.533093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:51.026 [2024-10-01 03:48:43.533098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:51.026 [2024-10-01 03:48:43.533105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:51.026 [2024-10-01 03:48:43.533111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:51.026 [2024-10-01 03:48:43.533117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:51.026 [2024-10-01 03:48:43.533122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:51.026 [2024-10-01 03:48:43.533128] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:51.026 [2024-10-01 03:48:43.533134] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:51.026 [2024-10-01 03:48:43.533140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:51.026 [2024-10-01 03:48:43.533145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:51.026 [2024-10-01 03:48:43.533151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:51.026 [2024-10-01 03:48:43.533156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:51.026 [2024-10-01 03:48:43.533162] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:51.026 [2024-10-01 03:48:43.533167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:51.026 [2024-10-01 03:48:43.533173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:51.026 [2024-10-01 03:48:43.533179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:51.026 [2024-10-01 03:48:43.533184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:51.026 [2024-10-01 03:48:43.533190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:51.026 [2024-10-01 03:48:43.533195] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:51.026 [2024-10-01 03:48:43.533200] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:51.026 [2024-10-01 03:48:43.533205] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:51.026 [2024-10-01 03:48:43.533212] ftl_layout.c: 131:dump_region: 
*NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:51.026 [2024-10-01 03:48:43.533217] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:51.026 [2024-10-01 03:48:43.533222] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:51.026 [2024-10-01 03:48:43.533228] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:51.026 [2024-10-01 03:48:43.533233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:51.026 [2024-10-01 03:48:43.533238] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:51.026 [2024-10-01 03:48:43.533244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:51.026 [2024-10-01 03:48:43.533254] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:51.026 [2024-10-01 03:48:43.533260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:51.026 [2024-10-01 03:48:43.533266] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:51.026 [2024-10-01 03:48:43.533272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:51.026 [2024-10-01 03:48:43.533280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:51.026 [2024-10-01 03:48:43.533292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:51.026 [2024-10-01 03:48:43.533298] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:51.026 [2024-10-01 03:48:43.533303] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:51.026 [2024-10-01 03:48:43.533310] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:51.026 [2024-10-01 03:48:43.533318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:51.026 [2024-10-01 03:48:43.533324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:51.026 [2024-10-01 03:48:43.533330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:51.026 [2024-10-01 03:48:43.533336] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:51.026 [2024-10-01 03:48:43.533342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:51.026 [2024-10-01 03:48:43.533348] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:51.026 [2024-10-01 03:48:43.533354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:51.026 [2024-10-01 03:48:43.533360] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:51.026 [2024-10-01 03:48:43.533365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:51.026 [2024-10-01 03:48:43.533371] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:51.026 [2024-10-01 03:48:43.533376] 
upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:51.026 [2024-10-01 03:48:43.533382] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:51.026 [2024-10-01 03:48:43.533388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:51.026 [2024-10-01 03:48:43.533394] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:51.026 [2024-10-01 03:48:43.533400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:51.026 [2024-10-01 03:48:43.533406] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:51.026 [2024-10-01 03:48:43.533412] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:51.026 [2024-10-01 03:48:43.533419] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:51.026 [2024-10-01 03:48:43.533424] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:51.026 [2024-10-01 03:48:43.533430] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:51.026 [2024-10-01 03:48:43.533436] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:51.026 [2024-10-01 03:48:43.533442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.026 [2024-10-01 03:48:43.533449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:51.026 [2024-10-01 03:48:43.533455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.531 ms 00:21:51.026 [2024-10-01 03:48:43.533460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.026 [2024-10-01 03:48:43.571831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.026 [2024-10-01 03:48:43.571874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:51.026 [2024-10-01 03:48:43.571886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.319 ms 00:21:51.026 [2024-10-01 03:48:43.571894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.026 [2024-10-01 03:48:43.571981] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.026 [2024-10-01 03:48:43.571989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:51.026 [2024-10-01 03:48:43.571996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:21:51.026 [2024-10-01 03:48:43.572014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.598419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.598452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:51.286 [2024-10-01 03:48:43.598464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 26.347 ms 00:21:51.286 [2024-10-01 03:48:43.598471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.598507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.598515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:51.286 [2024-10-01 03:48:43.598521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:51.286 [2024-10-01 03:48:43.598528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.598955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.598970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:51.286 [2024-10-01 03:48:43.598978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:21:51.286 [2024-10-01 03:48:43.598989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.599115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.599129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:51.286 [2024-10-01 03:48:43.599136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:21:51.286 [2024-10-01 03:48:43.599142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.610282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.610309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:51.286 [2024-10-01 03:48:43.610319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.122 ms 00:21:51.286 [2024-10-01 03:48:43.610325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.620621] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:51.286 [2024-10-01 03:48:43.620652] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:51.286 [2024-10-01 03:48:43.620661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.620669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:51.286 [2024-10-01 03:48:43.620676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.228 ms 00:21:51.286 [2024-10-01 03:48:43.620683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.639368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.639401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:51.286 [2024-10-01 03:48:43.639411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.646 ms 00:21:51.286 [2024-10-01 03:48:43.639418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.648402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.648429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:51.286 [2024-10-01 03:48:43.648437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.950 ms 00:21:51.286 [2024-10-01 03:48:43.648444] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.656843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.656869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:51.286 [2024-10-01 03:48:43.656877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.363 ms 00:21:51.286 [2024-10-01 03:48:43.656883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.657381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.657403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:51.286 [2024-10-01 03:48:43.657411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.434 ms 00:21:51.286 [2024-10-01 03:48:43.657417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.705251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.705309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:51.286 [2024-10-01 03:48:43.705320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.816 ms 00:21:51.286 [2024-10-01 03:48:43.705328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.714062] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:51.286 [2024-10-01 03:48:43.716749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.716777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:51.286 [2024-10-01 03:48:43.716788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.362 ms 00:21:51.286 [2024-10-01 03:48:43.716799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.716888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.716897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:51.286 [2024-10-01 03:48:43.716905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:51.286 [2024-10-01 03:48:43.716912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.717582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.717612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:51.286 [2024-10-01 03:48:43.717620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.620 ms 00:21:51.286 [2024-10-01 03:48:43.717627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.717654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.717662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:51.286 [2024-10-01 03:48:43.717670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:51.286 [2024-10-01 03:48:43.717676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.717709] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:51.286 [2024-10-01 03:48:43.717718] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.717725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:51.286 [2024-10-01 03:48:43.717734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:51.286 [2024-10-01 03:48:43.717742] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.736318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.736350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:51.286 [2024-10-01 03:48:43.736360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.562 ms 00:21:51.286 [2024-10-01 03:48:43.736367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.736433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:51.286 [2024-10-01 03:48:43.736442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:51.286 [2024-10-01 03:48:43.736449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:51.286 [2024-10-01 03:48:43.736455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:51.286 [2024-10-01 03:48:43.737479] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 226.725 ms, result 0 00:22:12.968  Copying: 46/1024 [MB] (46 MBps) Copying: 91/1024 [MB] (45 MBps) Copying: 139/1024 [MB] (47 MBps) Copying: 185/1024 [MB] (46 MBps) Copying: 232/1024 [MB] (47 MBps) Copying: 282/1024 [MB] (49 MBps) Copying: 331/1024 [MB] (48 MBps) Copying: 381/1024 [MB] (50 MBps) Copying: 429/1024 [MB] (47 MBps) Copying: 473/1024 [MB] (44 MBps) Copying: 524/1024 [MB] (51 MBps) Copying: 568/1024 [MB] (44 MBps) Copying: 616/1024 [MB] (47 MBps) Copying: 666/1024 [MB] (49 MBps) Copying: 714/1024 [MB] (48 MBps) Copying: 757/1024 [MB] (43 MBps) Copying: 807/1024 [MB] (49 MBps) Copying: 853/1024 [MB] (46 MBps) Copying: 902/1024 [MB] (48 MBps) Copying: 949/1024 [MB] (46 MBps) Copying: 999/1024 [MB] (50 MBps) Copying: 1024/1024 [MB] (average 47 MBps)[2024-10-01 03:49:05.404948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.405040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:12.968 [2024-10-01 03:49:05.405057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:12.968 [2024-10-01 03:49:05.405072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.968 [2024-10-01 03:49:05.405096] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:12.968 [2024-10-01 03:49:05.407945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.407975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:12.968 [2024-10-01 03:49:05.407987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.832 ms 00:22:12.968 [2024-10-01 03:49:05.407996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.968 [2024-10-01 03:49:05.408234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.408252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:12.968 [2024-10-01 03:49:05.408261] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.206 ms 00:22:12.968 [2024-10-01 03:49:05.408270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.968 [2024-10-01 03:49:05.411713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.411731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:12.968 [2024-10-01 03:49:05.411741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.426 ms 00:22:12.968 [2024-10-01 03:49:05.411750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.968 [2024-10-01 03:49:05.417882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.417904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:12.968 [2024-10-01 03:49:05.417914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.118 ms 00:22:12.968 [2024-10-01 03:49:05.417924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.968 [2024-10-01 03:49:05.446437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.446478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:12.968 [2024-10-01 03:49:05.446492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.455 ms 00:22:12.968 [2024-10-01 03:49:05.446500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.968 [2024-10-01 03:49:05.461562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.461605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:12.968 [2024-10-01 03:49:05.461617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.018 ms 00:22:12.968 [2024-10-01 03:49:05.461625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.968 [2024-10-01 03:49:05.463578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.463605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:12.968 [2024-10-01 03:49:05.463615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.927 ms 00:22:12.968 [2024-10-01 03:49:05.463623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.968 [2024-10-01 03:49:05.486892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.486924] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:12.968 [2024-10-01 03:49:05.486936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.254 ms 00:22:12.968 [2024-10-01 03:49:05.486944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:12.968 [2024-10-01 03:49:05.509750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:12.968 [2024-10-01 03:49:05.509780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:12.968 [2024-10-01 03:49:05.509791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.773 ms 00:22:12.968 [2024-10-01 03:49:05.509800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.228 [2024-10-01 03:49:05.532442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.228 [2024-10-01 03:49:05.532475] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist 
superblock 00:22:13.228 [2024-10-01 03:49:05.532485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.610 ms 00:22:13.228 [2024-10-01 03:49:05.532493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.228 [2024-10-01 03:49:05.554941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.228 [2024-10-01 03:49:05.554972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:13.228 [2024-10-01 03:49:05.554982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.390 ms 00:22:13.228 [2024-10-01 03:49:05.554990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.228 [2024-10-01 03:49:05.555034] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:13.228 [2024-10-01 03:49:05.555052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:13.228 [2024-10-01 03:49:05.555063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:22:13.228 [2024-10-01 03:49:05.555072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555194] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 
03:49:05.555209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:13.228 [2024-10-01 03:49:05.555385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 
00:22:13.229 [2024-10-01 03:49:05.555392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 
wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:13.229 [2024-10-01 03:49:05.555852] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:13.229 [2024-10-01 03:49:05.555860] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: cad42eb8-b470-467b-8b2f-e8c0d12f6694 00:22:13.229 [2024-10-01 03:49:05.555867] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:22:13.229 [2024-10-01 03:49:05.555875] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:22:13.229 [2024-10-01 03:49:05.555882] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:22:13.229 [2024-10-01 03:49:05.555890] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:22:13.229 [2024-10-01 03:49:05.555897] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:13.229 [2024-10-01 03:49:05.555909] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:13.229 [2024-10-01 03:49:05.555917] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:13.229 [2024-10-01 03:49:05.555924] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:13.229 [2024-10-01 03:49:05.555930] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:13.229 [2024-10-01 03:49:05.555938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.229 [2024-10-01 03:49:05.555954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:13.229 [2024-10-01 03:49:05.555964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.905 ms 00:22:13.229 [2024-10-01 03:49:05.555971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.229 [2024-10-01 03:49:05.568784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.229 [2024-10-01 03:49:05.568815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:13.229 [2024-10-01 03:49:05.568827] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.797 ms 00:22:13.229 [2024-10-01 03:49:05.568841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.229 [2024-10-01 03:49:05.569219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:13.229 [2024-10-01 03:49:05.569235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:13.229 [2024-10-01 03:49:05.569244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.355 ms 00:22:13.229 [2024-10-01 03:49:05.569252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.229 [2024-10-01 
03:49:05.598836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.229 [2024-10-01 03:49:05.598877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:13.229 [2024-10-01 03:49:05.598889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.229 [2024-10-01 03:49:05.598897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.229 [2024-10-01 03:49:05.598964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.229 [2024-10-01 03:49:05.598972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:13.229 [2024-10-01 03:49:05.598980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.229 [2024-10-01 03:49:05.598988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.229 [2024-10-01 03:49:05.599085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.229 [2024-10-01 03:49:05.599097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:13.229 [2024-10-01 03:49:05.599105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.229 [2024-10-01 03:49:05.599115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.229 [2024-10-01 03:49:05.599131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.229 [2024-10-01 03:49:05.599140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:13.229 [2024-10-01 03:49:05.599148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.229 [2024-10-01 03:49:05.599155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.229 [2024-10-01 03:49:05.681127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.229 [2024-10-01 03:49:05.681186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:13.229 [2024-10-01 03:49:05.681205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.230 [2024-10-01 03:49:05.681214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.230 [2024-10-01 03:49:05.747917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.230 [2024-10-01 03:49:05.747980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:13.230 [2024-10-01 03:49:05.747992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.230 [2024-10-01 03:49:05.748014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.230 [2024-10-01 03:49:05.748074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.230 [2024-10-01 03:49:05.748084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:13.230 [2024-10-01 03:49:05.748093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.230 [2024-10-01 03:49:05.748100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.230 [2024-10-01 03:49:05.748163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.230 [2024-10-01 03:49:05.748172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:13.230 [2024-10-01 03:49:05.748181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.230 [2024-10-01 03:49:05.748188] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.230 [2024-10-01 03:49:05.748278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.230 [2024-10-01 03:49:05.748289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:13.230 [2024-10-01 03:49:05.748297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.230 [2024-10-01 03:49:05.748305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.230 [2024-10-01 03:49:05.748337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.230 [2024-10-01 03:49:05.748347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:13.230 [2024-10-01 03:49:05.748355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.230 [2024-10-01 03:49:05.748362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.230 [2024-10-01 03:49:05.748400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.230 [2024-10-01 03:49:05.748408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:13.230 [2024-10-01 03:49:05.748416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.230 [2024-10-01 03:49:05.748424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.230 [2024-10-01 03:49:05.748471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:13.230 [2024-10-01 03:49:05.748481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:13.230 [2024-10-01 03:49:05.748490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:13.230 [2024-10-01 03:49:05.748497] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:13.230 [2024-10-01 03:49:05.748625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 343.646 ms, result 0 00:22:14.164 00:22:14.164 00:22:14.164 03:49:06 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:16.694 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:22:16.694 Process with pid 76116 is not found 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 76116 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@950 -- # '[' -z 76116 ']' 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # kill -0 76116 00:22:16.694 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 
954: kill: (76116) - No such process 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@977 -- # echo 'Process with pid 76116 is not found' 00:22:16.694 03:49:08 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:22:16.952 Remove shared memory files 00:22:16.952 03:49:09 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:22:16.952 03:49:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:22:16.952 03:49:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:22:16.952 03:49:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:22:16.952 03:49:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:22:16.952 03:49:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:22:16.952 03:49:09 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:22:16.952 00:22:16.952 real 2m10.756s 00:22:16.952 user 2m26.678s 00:22:16.952 sys 0m21.829s 00:22:16.952 03:49:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:16.952 03:49:09 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:16.952 ************************************ 00:22:16.952 END TEST ftl_dirty_shutdown 00:22:16.952 ************************************ 00:22:16.952 03:49:09 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:16.952 03:49:09 ftl -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:22:16.952 03:49:09 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:16.952 03:49:09 ftl -- common/autotest_common.sh@10 -- # set +x 00:22:16.952 ************************************ 00:22:16.952 START TEST ftl_upgrade_shutdown 00:22:16.952 ************************************ 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:22:16.952 * Looking for test storage... 
00:22:16.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lcov --version 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:22:16.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.952 --rc genhtml_branch_coverage=1 00:22:16.952 --rc genhtml_function_coverage=1 00:22:16.952 --rc genhtml_legend=1 00:22:16.952 --rc geninfo_all_blocks=1 00:22:16.952 --rc geninfo_unexecuted_blocks=1 00:22:16.952 00:22:16.952 ' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:22:16.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.952 --rc genhtml_branch_coverage=1 00:22:16.952 --rc genhtml_function_coverage=1 00:22:16.952 --rc genhtml_legend=1 00:22:16.952 --rc geninfo_all_blocks=1 00:22:16.952 --rc geninfo_unexecuted_blocks=1 00:22:16.952 00:22:16.952 ' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:22:16.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.952 --rc genhtml_branch_coverage=1 00:22:16.952 --rc genhtml_function_coverage=1 00:22:16.952 --rc genhtml_legend=1 00:22:16.952 --rc geninfo_all_blocks=1 00:22:16.952 --rc geninfo_unexecuted_blocks=1 00:22:16.952 00:22:16.952 ' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:22:16.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:16.952 --rc genhtml_branch_coverage=1 00:22:16.952 --rc genhtml_function_coverage=1 00:22:16.952 --rc genhtml_legend=1 00:22:16.952 --rc geninfo_all_blocks=1 00:22:16.952 --rc geninfo_unexecuted_blocks=1 00:22:16.952 00:22:16.952 ' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:22:16.952 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:22:16.953 03:49:09 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=77621 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 77621 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77621 ']' 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:16.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:16.953 03:49:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:22:17.211 [2024-10-01 03:49:09.567421] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
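The trace above is ftl/common.sh bringing up the FTL target: spdk_tgt is launched pinned to core 0 and waitforlisten blocks until the app answers on /var/tmp/spdk.sock before any bdev RPCs are issued. A minimal sketch of that bring-up, with a simple polling loop standing in for waitforlisten (rpc_get_methods is used here only as a readiness probe; the real helper additionally bounds its retries and checks that the pid is still alive):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt "--cpumask=[0]" &
  spdk_tgt_pid=$!
  # stand-in for waitforlisten: poll the default RPC socket until the app responds
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
      sleep 0.1
  done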
00:22:17.212 [2024-10-01 03:49:09.567534] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77621 ] 00:22:17.212 [2024-10-01 03:49:09.715874] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.470 [2024-10-01 03:49:09.934635] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:22:18.039 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:22:18.607 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:22:18.607 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:22:18.607 03:49:10 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:22:18.607 03:49:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=basen1 00:22:18.607 03:49:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:18.607 03:49:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:18.607 03:49:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 
-- # local nb 00:22:18.607 03:49:10 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:22:18.607 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:18.607 { 00:22:18.607 "name": "basen1", 00:22:18.607 "aliases": [ 00:22:18.607 "0c66182f-8c99-4b81-8937-2b5463683fe0" 00:22:18.607 ], 00:22:18.607 "product_name": "NVMe disk", 00:22:18.607 "block_size": 4096, 00:22:18.607 "num_blocks": 1310720, 00:22:18.607 "uuid": "0c66182f-8c99-4b81-8937-2b5463683fe0", 00:22:18.607 "numa_id": -1, 00:22:18.607 "assigned_rate_limits": { 00:22:18.607 "rw_ios_per_sec": 0, 00:22:18.607 "rw_mbytes_per_sec": 0, 00:22:18.607 "r_mbytes_per_sec": 0, 00:22:18.607 "w_mbytes_per_sec": 0 00:22:18.607 }, 00:22:18.607 "claimed": true, 00:22:18.607 "claim_type": "read_many_write_one", 00:22:18.607 "zoned": false, 00:22:18.607 "supported_io_types": { 00:22:18.607 "read": true, 00:22:18.607 "write": true, 00:22:18.608 "unmap": true, 00:22:18.608 "flush": true, 00:22:18.608 "reset": true, 00:22:18.608 "nvme_admin": true, 00:22:18.608 "nvme_io": true, 00:22:18.608 "nvme_io_md": false, 00:22:18.608 "write_zeroes": true, 00:22:18.608 "zcopy": false, 00:22:18.608 "get_zone_info": false, 00:22:18.608 "zone_management": false, 00:22:18.608 "zone_append": false, 00:22:18.608 "compare": true, 00:22:18.608 "compare_and_write": false, 00:22:18.608 "abort": true, 00:22:18.608 "seek_hole": false, 00:22:18.608 "seek_data": false, 00:22:18.608 "copy": true, 00:22:18.608 "nvme_iov_md": false 00:22:18.608 }, 00:22:18.608 "driver_specific": { 00:22:18.608 "nvme": [ 00:22:18.608 { 00:22:18.608 "pci_address": "0000:00:11.0", 00:22:18.608 "trid": { 00:22:18.608 "trtype": "PCIe", 00:22:18.608 "traddr": "0000:00:11.0" 00:22:18.608 }, 00:22:18.608 "ctrlr_data": { 00:22:18.608 "cntlid": 0, 00:22:18.608 "vendor_id": "0x1b36", 00:22:18.608 "model_number": "QEMU NVMe Ctrl", 00:22:18.608 "serial_number": "12341", 00:22:18.608 "firmware_revision": "8.0.0", 00:22:18.608 "subnqn": "nqn.2019-08.org.qemu:12341", 00:22:18.608 "oacs": { 00:22:18.608 "security": 0, 00:22:18.608 "format": 1, 00:22:18.608 "firmware": 0, 00:22:18.608 "ns_manage": 1 00:22:18.608 }, 00:22:18.608 "multi_ctrlr": false, 00:22:18.608 "ana_reporting": false 00:22:18.608 }, 00:22:18.608 "vs": { 00:22:18.608 "nvme_version": "1.4" 00:22:18.608 }, 00:22:18.608 "ns_data": { 00:22:18.608 "id": 1, 00:22:18.608 "can_share": false 00:22:18.608 } 00:22:18.608 } 00:22:18.608 ], 00:22:18.608 "mp_policy": "active_passive" 00:22:18.608 } 00:22:18.608 } 00:22:18.608 ]' 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=1310720 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 5120 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:18.608 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:22:18.867 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=719a4a96-9d83-4c2d-bba8-0ecf361227c5 00:22:18.867 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:22:18.867 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 719a4a96-9d83-4c2d-bba8-0ecf361227c5 00:22:19.125 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:22:19.385 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=cd48bc65-5440-47ef-9f09-f7cc53b317a7 00:22:19.385 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u cd48bc65-5440-47ef-9f09-f7cc53b317a7 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=ce6548a5-0190-4408-b809-0004db44901b 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z ce6548a5-0190-4408-b809-0004db44901b ]] 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 ce6548a5-0190-4408-b809-0004db44901b 5120 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=ce6548a5-0190-4408-b809-0004db44901b 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size ce6548a5-0190-4408-b809-0004db44901b 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1378 -- # local bdev_name=ce6548a5-0190-4408-b809-0004db44901b 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1379 -- # local bdev_info 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bs 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local nb 00:22:19.644 03:49:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ce6548a5-0190-4408-b809-0004db44901b 00:22:19.644 03:49:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:22:19.644 { 00:22:19.644 "name": "ce6548a5-0190-4408-b809-0004db44901b", 00:22:19.644 "aliases": [ 00:22:19.644 "lvs/basen1p0" 00:22:19.644 ], 00:22:19.644 "product_name": "Logical Volume", 00:22:19.644 "block_size": 4096, 00:22:19.644 "num_blocks": 5242880, 00:22:19.644 "uuid": "ce6548a5-0190-4408-b809-0004db44901b", 00:22:19.644 "assigned_rate_limits": { 00:22:19.644 "rw_ios_per_sec": 0, 00:22:19.644 "rw_mbytes_per_sec": 0, 00:22:19.644 "r_mbytes_per_sec": 0, 00:22:19.644 "w_mbytes_per_sec": 0 00:22:19.644 }, 00:22:19.644 "claimed": false, 00:22:19.644 "zoned": false, 00:22:19.644 "supported_io_types": { 00:22:19.644 "read": true, 00:22:19.644 "write": true, 00:22:19.644 "unmap": true, 00:22:19.644 "flush": false, 00:22:19.644 "reset": true, 00:22:19.644 "nvme_admin": false, 00:22:19.644 "nvme_io": false, 00:22:19.644 "nvme_io_md": false, 00:22:19.644 "write_zeroes": 
true, 00:22:19.644 "zcopy": false, 00:22:19.644 "get_zone_info": false, 00:22:19.644 "zone_management": false, 00:22:19.644 "zone_append": false, 00:22:19.644 "compare": false, 00:22:19.644 "compare_and_write": false, 00:22:19.644 "abort": false, 00:22:19.644 "seek_hole": true, 00:22:19.644 "seek_data": true, 00:22:19.644 "copy": false, 00:22:19.644 "nvme_iov_md": false 00:22:19.644 }, 00:22:19.644 "driver_specific": { 00:22:19.644 "lvol": { 00:22:19.644 "lvol_store_uuid": "cd48bc65-5440-47ef-9f09-f7cc53b317a7", 00:22:19.644 "base_bdev": "basen1", 00:22:19.644 "thin_provision": true, 00:22:19.644 "num_allocated_clusters": 0, 00:22:19.644 "snapshot": false, 00:22:19.644 "clone": false, 00:22:19.644 "esnap_clone": false 00:22:19.644 } 00:22:19.644 } 00:22:19.644 } 00:22:19.644 ]' 00:22:19.644 03:49:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:22:19.903 03:49:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # bs=4096 00:22:19.903 03:49:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:22:19.903 03:49:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # nb=5242880 00:22:19.903 03:49:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bdev_size=20480 00:22:19.903 03:49:12 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # echo 20480 00:22:19.903 03:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:22:19.903 03:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:22:19.903 03:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:22:20.162 03:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:22:20.162 03:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:22:20.162 03:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:22:20.162 03:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:22:20.162 03:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:22:20.162 03:49:12 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d ce6548a5-0190-4408-b809-0004db44901b -c cachen1p0 --l2p_dram_limit 2 00:22:20.423 [2024-10-01 03:49:12.874453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.423 [2024-10-01 03:49:12.874503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:22:20.423 [2024-10-01 03:49:12.874516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:20.424 [2024-10-01 03:49:12.874523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.874562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.874584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:20.424 [2024-10-01 03:49:12.874594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:22:20.424 [2024-10-01 03:49:12.874602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.874621] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:22:20.424 [2024-10-01 
03:49:12.875142] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:22:20.424 [2024-10-01 03:49:12.875172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.875179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:20.424 [2024-10-01 03:49:12.875188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.553 ms 00:22:20.424 [2024-10-01 03:49:12.875198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.875225] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID d93d0fd4-40b3-4195-8069-69c969fdae9f 00:22:20.424 [2024-10-01 03:49:12.876462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.876496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:22:20.424 [2024-10-01 03:49:12.876505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:22:20.424 [2024-10-01 03:49:12.876514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.883194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.883222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:20.424 [2024-10-01 03:49:12.883231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.621 ms 00:22:20.424 [2024-10-01 03:49:12.883239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.883272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.883281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:20.424 [2024-10-01 03:49:12.883289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:22:20.424 [2024-10-01 03:49:12.883304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.883341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.883352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:22:20.424 [2024-10-01 03:49:12.883359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:20.424 [2024-10-01 03:49:12.883367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.883383] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:22:20.424 [2024-10-01 03:49:12.886628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.886654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:20.424 [2024-10-01 03:49:12.886663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.248 ms 00:22:20.424 [2024-10-01 03:49:12.886670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.886693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.886700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:22:20.424 [2024-10-01 03:49:12.886708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:20.424 [2024-10-01 03:49:12.886716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.886731] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:22:20.424 [2024-10-01 03:49:12.886842] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:22:20.424 [2024-10-01 03:49:12.886856] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:22:20.424 [2024-10-01 03:49:12.886866] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:22:20.424 [2024-10-01 03:49:12.886878] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:22:20.424 [2024-10-01 03:49:12.886886] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:22:20.424 [2024-10-01 03:49:12.886894] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:22:20.424 [2024-10-01 03:49:12.886900] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:22:20.424 [2024-10-01 03:49:12.886907] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:22:20.424 [2024-10-01 03:49:12.886914] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:22:20.424 [2024-10-01 03:49:12.886923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.886928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:22:20.424 [2024-10-01 03:49:12.886936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.193 ms 00:22:20.424 [2024-10-01 03:49:12.886942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.887020] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.424 [2024-10-01 03:49:12.887037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:22:20.424 [2024-10-01 03:49:12.887045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.064 ms 00:22:20.424 [2024-10-01 03:49:12.887050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.424 [2024-10-01 03:49:12.887127] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:22:20.424 [2024-10-01 03:49:12.887136] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:22:20.424 [2024-10-01 03:49:12.887144] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:20.424 [2024-10-01 03:49:12.887150] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.424 [2024-10-01 03:49:12.887158] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:22:20.424 [2024-10-01 03:49:12.887163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:22:20.424 [2024-10-01 03:49:12.887170] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:22:20.424 [2024-10-01 03:49:12.887176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:22:20.424 [2024-10-01 03:49:12.887183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:22:20.424 [2024-10-01 03:49:12.887188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.424 [2024-10-01 03:49:12.887195] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:22:20.424 [2024-10-01 03:49:12.887200] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:22:20.424 [2024-10-01 03:49:12.887206] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.424 [2024-10-01 03:49:12.887212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:22:20.424 [2024-10-01 03:49:12.887218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:22:20.424 [2024-10-01 03:49:12.887224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.424 [2024-10-01 03:49:12.887232] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:22:20.424 [2024-10-01 03:49:12.887237] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:22:20.424 [2024-10-01 03:49:12.887244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.424 [2024-10-01 03:49:12.887250] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:22:20.424 [2024-10-01 03:49:12.887258] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:22:20.424 [2024-10-01 03:49:12.887264] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:20.424 [2024-10-01 03:49:12.887271] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:22:20.424 [2024-10-01 03:49:12.887277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:22:20.424 [2024-10-01 03:49:12.887284] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:20.424 [2024-10-01 03:49:12.887291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:22:20.424 [2024-10-01 03:49:12.887297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:22:20.424 [2024-10-01 03:49:12.887302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:20.424 [2024-10-01 03:49:12.887309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:22:20.424 [2024-10-01 03:49:12.887315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:22:20.424 [2024-10-01 03:49:12.887322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:22:20.424 [2024-10-01 03:49:12.887327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:22:20.424 [2024-10-01 03:49:12.887335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:22:20.424 [2024-10-01 03:49:12.887340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.424 [2024-10-01 03:49:12.887347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:22:20.424 [2024-10-01 03:49:12.887352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:22:20.424 [2024-10-01 03:49:12.887359] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.424 [2024-10-01 03:49:12.887364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:22:20.424 [2024-10-01 03:49:12.887372] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:22:20.425 [2024-10-01 03:49:12.887377] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.425 [2024-10-01 03:49:12.887383] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:22:20.425 [2024-10-01 03:49:12.887389] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:22:20.425 [2024-10-01 03:49:12.887395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.425 [2024-10-01 03:49:12.887399] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:22:20.425 [2024-10-01 03:49:12.887407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:22:20.425 [2024-10-01 03:49:12.887414] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:22:20.425 [2024-10-01 03:49:12.887421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:22:20.425 [2024-10-01 03:49:12.887427] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:22:20.425 [2024-10-01 03:49:12.887438] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:22:20.425 [2024-10-01 03:49:12.887442] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:22:20.425 [2024-10-01 03:49:12.887450] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:22:20.425 [2024-10-01 03:49:12.887455] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:22:20.425 [2024-10-01 03:49:12.887463] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:22:20.425 [2024-10-01 03:49:12.887472] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:22:20.425 [2024-10-01 03:49:12.887481] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:22:20.425 [2024-10-01 03:49:12.887497] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887503] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887510] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:22:20.425 [2024-10-01 03:49:12.887516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:22:20.425 [2024-10-01 03:49:12.887523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:22:20.425 [2024-10-01 03:49:12.887529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:22:20.425 [2024-10-01 03:49:12.887536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887556] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:22:20.425 [2024-10-01 03:49:12.887584] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:22:20.425 [2024-10-01 03:49:12.887592] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:22:20.425 [2024-10-01 03:49:12.887606] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:22:20.425 [2024-10-01 03:49:12.887612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:22:20.425 [2024-10-01 03:49:12.887620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:22:20.425 [2024-10-01 03:49:12.887626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:20.425 [2024-10-01 03:49:12.887633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:22:20.425 [2024-10-01 03:49:12.887639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.554 ms 00:22:20.425 [2024-10-01 03:49:12.887646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:20.425 [2024-10-01 03:49:12.887690] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
00:22:20.425 [2024-10-01 03:49:12.887705] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:22:22.956 [2024-10-01 03:49:15.167211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.167279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:22:22.956 [2024-10-01 03:49:15.167292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2279.510 ms 00:22:22.956 [2024-10-01 03:49:15.167301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.191053] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.191105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:22.956 [2024-10-01 03:49:15.191118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.584 ms 00:22:22.956 [2024-10-01 03:49:15.191126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.191202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.191213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:22:22.956 [2024-10-01 03:49:15.191220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:22:22.956 [2024-10-01 03:49:15.191235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.229852] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.229937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:22.956 [2024-10-01 03:49:15.229967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.583 ms 00:22:22.956 [2024-10-01 03:49:15.229990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.230081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.230105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:22.956 [2024-10-01 03:49:15.230123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:22:22.956 [2024-10-01 03:49:15.230141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.230747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.230795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:22.956 [2024-10-01 03:49:15.230826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.500 ms 00:22:22.956 [2024-10-01 03:49:15.230848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.230925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.230946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:22.956 [2024-10-01 03:49:15.230963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.038 ms 00:22:22.956 [2024-10-01 03:49:15.230985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.247902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.247935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:22.956 [2024-10-01 03:49:15.247944] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.863 ms 00:22:22.956 [2024-10-01 03:49:15.247951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.257812] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:22:22.956 [2024-10-01 03:49:15.258738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.258761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:22:22.956 [2024-10-01 03:49:15.258773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.693 ms 00:22:22.956 [2024-10-01 03:49:15.258779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.278240] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.278272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:22:22.956 [2024-10-01 03:49:15.278286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.438 ms 00:22:22.956 [2024-10-01 03:49:15.278292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.278365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.278373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:22:22.956 [2024-10-01 03:49:15.278384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:22:22.956 [2024-10-01 03:49:15.278390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.295325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.295355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:22:22.956 [2024-10-01 03:49:15.295366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.909 ms 00:22:22.956 [2024-10-01 03:49:15.295372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.312405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.312593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:22:22.956 [2024-10-01 03:49:15.312611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.001 ms 00:22:22.956 [2024-10-01 03:49:15.312617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.313093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.313104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:22:22.956 [2024-10-01 03:49:15.313113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.447 ms 00:22:22.956 [2024-10-01 03:49:15.313120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.388140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.388186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:22:22.956 [2024-10-01 03:49:15.388205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 74.987 ms 00:22:22.956 [2024-10-01 03:49:15.388216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.413346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:22:22.956 [2024-10-01 03:49:15.413385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:22:22.956 [2024-10-01 03:49:15.413400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.042 ms 00:22:22.956 [2024-10-01 03:49:15.413409] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.436217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.436250] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:22:22.956 [2024-10-01 03:49:15.436263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 22.768 ms 00:22:22.956 [2024-10-01 03:49:15.436270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.459483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.459513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:22:22.956 [2024-10-01 03:49:15.459526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 23.174 ms 00:22:22.956 [2024-10-01 03:49:15.459534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.459575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.459585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:22:22.956 [2024-10-01 03:49:15.459601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:22:22.956 [2024-10-01 03:49:15.459609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.459707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:22.956 [2024-10-01 03:49:15.459717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:22:22.956 [2024-10-01 03:49:15.459727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:22:22.956 [2024-10-01 03:49:15.459736] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:22.956 [2024-10-01 03:49:15.460729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2585.814 ms, result 0 00:22:22.956 { 00:22:22.956 "name": "ftl", 00:22:22.956 "uuid": "d93d0fd4-40b3-4195-8069-69c969fdae9f" 00:22:22.956 } 00:22:22.956 03:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:22:23.215 [2024-10-01 03:49:15.664015] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.215 03:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:22:23.473 03:49:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:22:23.731 [2024-10-01 03:49:16.056413] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:22:23.731 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:22:23.731 [2024-10-01 03:49:16.265205] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:22:23.989 03:49:16 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:22:24.247 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:22:24.247 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:22:24.247 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:22:24.247 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:22:24.247 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:22:24.247 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:22:24.247 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:22:24.247 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:22:24.247 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:24.248 Fill FTL, iteration 1 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=77736 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 77736 /var/tmp/spdk.tgt.sock 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 77736 ']' 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:24.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:24.248 03:49:16 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:24.248 [2024-10-01 03:49:16.694746] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
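The variables traced above from upgrade_shutdown.sh (seek=0, skip=0, bs=1048576, count=1024, iterations=2, qd=2, sums=()) drive the fill-and-checksum loop whose first iteration begins here. A condensed sketch of that loop as reconstructed from this trace — tcp_dd is the ftl/common.sh helper whose mechanics are traced just below, and the exact increment form in the script itself may differ:

  seek=0; skip=0; sums=()
  for ((i = 0; i < iterations; i++)); do
      echo "Fill FTL, iteration $((i + 1))"
      tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=$seek
      ((seek += 1024))
      echo "Calculate MD5 checksum, iteration $((i + 1))"
      tcp_dd --ib=ftln1 --of=$testdir/file --bs=1048576 --count=1024 --qd=2 --skip=$skip
      ((skip += 1024))
      # one checksum recorded per iteration, kept in sums[] for later verification
      sums[i]=$(md5sum $testdir/file | cut -f1 -d' ')
  done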
00:22:24.248 [2024-10-01 03:49:16.694863] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77736 ] 00:22:24.506 [2024-10-01 03:49:16.843914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.506 [2024-10-01 03:49:17.026544] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.073 03:49:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:25.073 03:49:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:22:25.073 03:49:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:22:25.332 ftln1 00:22:25.332 03:49:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:22:25.332 03:49:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:22:25.590 03:49:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:22:25.590 03:49:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 77736 00:22:25.590 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77736 ']' 00:22:25.590 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77736 00:22:25.591 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:25.591 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:25.591 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77736 00:22:25.591 killing process with pid 77736 00:22:25.591 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:25.591 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:25.591 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77736' 00:22:25.591 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77736 00:22:25.591 03:49:18 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77736 00:22:27.490 03:49:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:22:27.490 03:49:19 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:22:27.490 [2024-10-01 03:49:19.640847] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
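tcp_dd, as traced above, runs in two phases: a short-lived initiator app pinned to core 1 attaches the NVMe/TCP controller as ftl (exposing bdev ftln1), its bdev subsystem configuration is saved to ini.json, the helper app is killed, and spdk_dd then replays that JSON to perform the actual I/O without any long-running initiator. Reconstructed from the trace, with killprocess reduced to a plain kill and rpc_ini introduced here purely for brevity:

  # assumes the initiator spdk_tgt started in the trace above is already
  # listening on /var/tmp/spdk.tgt.sock, with its pid in $spdk_ini_pid
  rpc_ini="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
  $rpc_ini bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
      -n nqn.2018-09.io.spdk:cnode0
  # wrap the bdev subsystem dump in a complete config document
  {
      echo '{"subsystems": ['
      $rpc_ini save_subsystem_config -n bdev
      echo ']}'
  } > /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json
  kill $spdk_ini_pid
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd "--cpumask=[1]" \
      --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0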
00:22:27.490 [2024-10-01 03:49:19.640963] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77779 ] 00:22:27.490 [2024-10-01 03:49:19.787526] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:27.490 [2024-10-01 03:49:19.937477] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:32.283  Copying: 279/1024 [MB] (279 MBps) Copying: 547/1024 [MB] (268 MBps) Copying: 811/1024 [MB] (264 MBps) Copying: 1024/1024 [MB] (average 271 MBps) 00:22:32.283 00:22:32.283 03:49:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:22:32.283 Calculate MD5 checksum, iteration 1 00:22:32.283 03:49:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:22:32.283 03:49:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:32.283 03:49:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:32.283 03:49:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:32.283 03:49:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:32.283 03:49:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:32.283 03:49:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:22:32.283 [2024-10-01 03:49:24.751672] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
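Note that this second tcp_dd invocation short-circuits: tcp_initiator_setup finds the ini.json written during the first call (the common.sh@153/@154 steps in the trace) and returns immediately rather than starting another initiator app, so only spdk_dd runs. The guard amounts to:

  # per the @153/@154 trace lines: reuse the saved initiator config if present
  [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] && return 0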
00:22:32.283 [2024-10-01 03:49:24.751940] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77836 ] 00:22:32.541 [2024-10-01 03:49:24.900633] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.541 [2024-10-01 03:49:25.052875] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.044  Copying: 654/1024 [MB] (654 MBps) Copying: 1024/1024 [MB] (average 650 MBps) 00:22:35.045 00:22:35.045 03:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:22:35.045 03:49:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:36.950 Fill FTL, iteration 2 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=415ac4e05d7e8a6b5ebc9517913f0a82 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:36.950 03:49:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:22:37.212 [2024-10-01 03:49:29.538630] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:22:37.212 [2024-10-01 03:49:29.538745] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77890 ] 00:22:37.212 [2024-10-01 03:49:29.687490] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.472 [2024-10-01 03:49:29.860719] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:42.853  Copying: 182/1024 [MB] (182 MBps) Copying: 421/1024 [MB] (239 MBps) Copying: 683/1024 [MB] (262 MBps) Copying: 965/1024 [MB] (282 MBps) Copying: 1024/1024 [MB] (average 241 MBps) 00:22:42.853 00:22:42.853 Calculate MD5 checksum, iteration 2 00:22:42.853 03:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:22:42.853 03:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:22:42.853 03:49:35 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:42.853 03:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:22:42.853 03:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:22:42.853 03:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:22:42.853 03:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:22:42.853 03:49:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:22:42.853 [2024-10-01 03:49:35.155136] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:22:42.853 [2024-10-01 03:49:35.155249] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77954 ] 00:22:42.853 [2024-10-01 03:49:35.304681] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.111 [2024-10-01 03:49:35.447830] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.072  Copying: 665/1024 [MB] (665 MBps) Copying: 1024/1024 [MB] (average 658 MBps) 00:22:46.072 00:22:46.072 03:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:22:46.072 03:49:38 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:22:47.989 03:49:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:22:47.989 03:49:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=ba5547434569a14f0f33de280c41ec6d 00:22:47.989 03:49:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:22:47.989 03:49:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:22:47.989 03:49:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:48.250 [2024-10-01 03:49:40.692200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.250 [2024-10-01 03:49:40.692259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:48.250 [2024-10-01 03:49:40.692273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:22:48.250 [2024-10-01 03:49:40.692284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.250 [2024-10-01 03:49:40.692303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.250 [2024-10-01 03:49:40.692311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:48.250 [2024-10-01 03:49:40.692318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:22:48.250 [2024-10-01 03:49:40.692325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.250 [2024-10-01 03:49:40.692341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.250 [2024-10-01 03:49:40.692348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:48.250 [2024-10-01 03:49:40.692354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:48.250 [2024-10-01 03:49:40.692362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.250 [2024-10-01 03:49:40.692417] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.211 ms, result 0 00:22:48.250 true 00:22:48.250 03:49:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:48.511 { 00:22:48.511 "name": "ftl", 00:22:48.511 "properties": [ 00:22:48.511 { 00:22:48.511 "name": "superblock_version", 00:22:48.511 "value": 5, 00:22:48.511 "read-only": true 00:22:48.511 }, 00:22:48.511 { 00:22:48.511 "name": "base_device", 00:22:48.511 "bands": [ 00:22:48.512 { 00:22:48.512 "id": 0, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 1, 
00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 2, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 3, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 4, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 5, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 6, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 7, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 8, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 9, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 10, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 11, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 12, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 13, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 14, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 15, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 16, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 17, 00:22:48.512 "state": "FREE", 00:22:48.512 "validity": 0.0 00:22:48.512 } 00:22:48.512 ], 00:22:48.512 "read-only": true 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "name": "cache_device", 00:22:48.512 "type": "bdev", 00:22:48.512 "chunks": [ 00:22:48.512 { 00:22:48.512 "id": 0, 00:22:48.512 "state": "INACTIVE", 00:22:48.512 "utilization": 0.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 1, 00:22:48.512 "state": "CLOSED", 00:22:48.512 "utilization": 1.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 2, 00:22:48.512 "state": "CLOSED", 00:22:48.512 "utilization": 1.0 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 3, 00:22:48.512 "state": "OPEN", 00:22:48.512 "utilization": 0.001953125 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "id": 4, 00:22:48.512 "state": "OPEN", 00:22:48.512 "utilization": 0.0 00:22:48.512 } 00:22:48.512 ], 00:22:48.512 "read-only": true 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "name": "verbose_mode", 00:22:48.512 "value": true, 00:22:48.512 "unit": "", 00:22:48.512 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:48.512 }, 00:22:48.512 { 00:22:48.512 "name": "prep_upgrade_on_shutdown", 00:22:48.512 "value": false, 00:22:48.512 "unit": "", 00:22:48.512 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:48.512 } 00:22:48.512 ] 00:22:48.512 } 00:22:48.512 03:49:40 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:22:48.774 [2024-10-01 03:49:41.088497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.774 [2024-10-01 03:49:41.088533] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:48.774 [2024-10-01 03:49:41.088542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:22:48.774 [2024-10-01 03:49:41.088549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.774 [2024-10-01 03:49:41.088565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.774 [2024-10-01 03:49:41.088572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:48.774 [2024-10-01 03:49:41.088580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:48.774 [2024-10-01 03:49:41.088586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.774 [2024-10-01 03:49:41.088601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:48.774 [2024-10-01 03:49:41.088608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:48.774 [2024-10-01 03:49:41.088614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:48.774 [2024-10-01 03:49:41.088619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:48.774 [2024-10-01 03:49:41.088662] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.157 ms, result 0 00:22:48.774 true 00:22:48.774 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:22:48.774 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:48.774 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:22:48.774 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:22:48.774 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:22:48.774 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:22:49.035 [2024-10-01 03:49:41.500800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:49.035 [2024-10-01 03:49:41.500827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:22:49.035 [2024-10-01 03:49:41.500835] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:49.035 [2024-10-01 03:49:41.500841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:49.035 [2024-10-01 03:49:41.500857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:49.035 [2024-10-01 03:49:41.500864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:22:49.035 [2024-10-01 03:49:41.500870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:49.035 [2024-10-01 03:49:41.500876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:49.035 [2024-10-01 03:49:41.500890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:49.035 [2024-10-01 03:49:41.500897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:22:49.035 [2024-10-01 03:49:41.500902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:22:49.035 [2024-10-01 03:49:41.500907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:49.035 [2024-10-01 03:49:41.500946] 
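The used=3 above counts cache chunks with non-zero utilization; with data still sitting in the NV cache, the shutdown-upgrade path actually has something to persist. The jq filter is copied from upgrade_shutdown.sh@63; the surrounding shell is a sketch, and the failure branch is an assumption since the log only shows the non-zero case:

    used=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl |
           jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1   # assumed: the test requires dirty chunks to proceed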
mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.136 ms, result 0 00:22:49.035 true 00:22:49.035 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:22:49.297 { 00:22:49.297 "name": "ftl", 00:22:49.297 "properties": [ 00:22:49.297 { 00:22:49.297 "name": "superblock_version", 00:22:49.297 "value": 5, 00:22:49.297 "read-only": true 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "name": "base_device", 00:22:49.297 "bands": [ 00:22:49.297 { 00:22:49.297 "id": 0, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 1, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 2, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 3, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 4, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 5, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 6, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 7, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 8, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 9, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 10, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 11, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 12, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 13, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 14, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 15, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 16, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 17, 00:22:49.297 "state": "FREE", 00:22:49.297 "validity": 0.0 00:22:49.297 } 00:22:49.297 ], 00:22:49.297 "read-only": true 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "name": "cache_device", 00:22:49.297 "type": "bdev", 00:22:49.297 "chunks": [ 00:22:49.297 { 00:22:49.297 "id": 0, 00:22:49.297 "state": "INACTIVE", 00:22:49.297 "utilization": 0.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 1, 00:22:49.297 "state": "CLOSED", 00:22:49.297 "utilization": 1.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 2, 00:22:49.297 "state": "CLOSED", 00:22:49.297 "utilization": 1.0 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 3, 00:22:49.297 "state": "OPEN", 00:22:49.297 "utilization": 0.001953125 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "id": 4, 00:22:49.297 "state": "OPEN", 00:22:49.297 "utilization": 0.0 00:22:49.297 } 00:22:49.297 ], 00:22:49.297 "read-only": true 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "name": "verbose_mode", 00:22:49.297 "value": true, 00:22:49.297 "unit": "", 00:22:49.297 
"desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:22:49.297 }, 00:22:49.297 { 00:22:49.297 "name": "prep_upgrade_on_shutdown", 00:22:49.297 "value": true, 00:22:49.297 "unit": "", 00:22:49.297 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:22:49.297 } 00:22:49.297 ] 00:22:49.297 } 00:22:49.297 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:22:49.297 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 77621 ]] 00:22:49.297 03:49:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 77621 00:22:49.297 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 77621 ']' 00:22:49.297 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 77621 00:22:49.297 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:22:49.298 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:49.298 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77621 00:22:49.298 killing process with pid 77621 00:22:49.298 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:49.298 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:49.298 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77621' 00:22:49.298 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 77621 00:22:49.298 03:49:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@974 -- # wait 77621 00:22:49.871 [2024-10-01 03:49:42.304429] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:22:49.871 [2024-10-01 03:49:42.316365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:49.872 [2024-10-01 03:49:42.316398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:22:49.872 [2024-10-01 03:49:42.316410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:22:49.872 [2024-10-01 03:49:42.316418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:49.872 [2024-10-01 03:49:42.316437] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:22:49.872 [2024-10-01 03:49:42.318652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:49.872 [2024-10-01 03:49:42.318681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:22:49.872 [2024-10-01 03:49:42.318690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.203 ms 00:22:49.872 [2024-10-01 03:49:42.318698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.878 [2024-10-01 03:49:50.698056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.878 [2024-10-01 03:49:50.698128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:22:59.878 [2024-10-01 03:49:50.698141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8379.302 ms 00:22:59.878 [2024-10-01 03:49:50.698149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.878 [2024-10-01 03:49:50.699320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.699335] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:22:59.879 [2024-10-01 03:49:50.699344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.158 ms 00:22:59.879 [2024-10-01 03:49:50.699350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.700226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.700412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:22:59.879 [2024-10-01 03:49:50.700426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.848 ms 00:22:59.879 [2024-10-01 03:49:50.700434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.708377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.708407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:22:59.879 [2024-10-01 03:49:50.708416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.911 ms 00:22:59.879 [2024-10-01 03:49:50.708422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.713054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.713082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:22:59.879 [2024-10-01 03:49:50.713091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.604 ms 00:22:59.879 [2024-10-01 03:49:50.713102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.713148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.713155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:22:59.879 [2024-10-01 03:49:50.713163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:22:59.879 [2024-10-01 03:49:50.713170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.720320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.720345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:22:59.879 [2024-10-01 03:49:50.720352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.137 ms 00:22:59.879 [2024-10-01 03:49:50.720359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.727837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.727958] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:22:59.879 [2024-10-01 03:49:50.727970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.452 ms 00:22:59.879 [2024-10-01 03:49:50.727976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.735344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.735369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:22:59.879 [2024-10-01 03:49:50.735376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.343 ms 00:22:59.879 [2024-10-01 03:49:50.735381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.742543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 
[2024-10-01 03:49:50.742648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:22:59.879 [2024-10-01 03:49:50.742660] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.103 ms 00:22:59.879 [2024-10-01 03:49:50.742665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.742688] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:22:59.879 [2024-10-01 03:49:50.742699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:22:59.879 [2024-10-01 03:49:50.742707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:22:59.879 [2024-10-01 03:49:50.742714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:22:59.879 [2024-10-01 03:49:50.742721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:59.879 [2024-10-01 03:49:50.742822] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:22:59.879 [2024-10-01 03:49:50.742828] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d93d0fd4-40b3-4195-8069-69c969fdae9f 00:22:59.879 [2024-10-01 03:49:50.742834] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:22:59.879 [2024-10-01 03:49:50.742843] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:22:59.879 [2024-10-01 03:49:50.742848] 
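The band dump and the counters are consistent: two bands fully written plus 2048 blocks of a third is exactly the two 1 GiB fills performed earlier, at 4 KiB per FTL block. A quick check:

    echo $(( 261120 + 261120 + 2048 ))   # 524288 -> matches 'total valid LBAs' above
    echo $(( 2048 * 1048576 / 4096 ))    # 524288 -> 2048 MiB of user data in 4 KiB blocks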
ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:22:59.879 [2024-10-01 03:49:50.742857] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:22:59.879 [2024-10-01 03:49:50.742863] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:22:59.879 [2024-10-01 03:49:50.742869] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:22:59.879 [2024-10-01 03:49:50.742875] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:22:59.879 [2024-10-01 03:49:50.742880] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:22:59.879 [2024-10-01 03:49:50.742886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:22:59.879 [2024-10-01 03:49:50.742893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.742900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:22:59.879 [2024-10-01 03:49:50.742907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.206 ms 00:22:59.879 [2024-10-01 03:49:50.742912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.752883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.752990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:22:59.879 [2024-10-01 03:49:50.753013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.958 ms 00:22:59.879 [2024-10-01 03:49:50.753020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.753300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:22:59.879 [2024-10-01 03:49:50.753309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:22:59.879 [2024-10-01 03:49:50.753316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.265 ms 00:22:59.879 [2024-10-01 03:49:50.753322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.784238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.879 [2024-10-01 03:49:50.784337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:22:59.879 [2024-10-01 03:49:50.784349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.879 [2024-10-01 03:49:50.784355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.784382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.879 [2024-10-01 03:49:50.784389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:22:59.879 [2024-10-01 03:49:50.784396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.879 [2024-10-01 03:49:50.784402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.784471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.879 [2024-10-01 03:49:50.784480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:22:59.879 [2024-10-01 03:49:50.784486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.879 [2024-10-01 03:49:50.784493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.784507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.879 [2024-10-01 
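The reported write amplification also checks out against the two counters in the stats dump:

    awk 'BEGIN { printf "%.4f\n", 786752 / 524288 }'   # -> 1.5006, as logged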
03:49:50.784513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:22:59.879 [2024-10-01 03:49:50.784519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.879 [2024-10-01 03:49:50.784525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.846483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.879 [2024-10-01 03:49:50.846524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:22:59.879 [2024-10-01 03:49:50.846533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.879 [2024-10-01 03:49:50.846540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.897162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.879 [2024-10-01 03:49:50.897202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:22:59.879 [2024-10-01 03:49:50.897211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.879 [2024-10-01 03:49:50.897218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.897285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.879 [2024-10-01 03:49:50.897298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:22:59.879 [2024-10-01 03:49:50.897305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.879 [2024-10-01 03:49:50.897312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.879 [2024-10-01 03:49:50.897360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.880 [2024-10-01 03:49:50.897369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:22:59.880 [2024-10-01 03:49:50.897375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.880 [2024-10-01 03:49:50.897381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.880 [2024-10-01 03:49:50.897457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.880 [2024-10-01 03:49:50.897468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:22:59.880 [2024-10-01 03:49:50.897475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.880 [2024-10-01 03:49:50.897481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.880 [2024-10-01 03:49:50.897510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.880 [2024-10-01 03:49:50.897518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:22:59.880 [2024-10-01 03:49:50.897524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.880 [2024-10-01 03:49:50.897530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.880 [2024-10-01 03:49:50.897567] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:22:59.880 [2024-10-01 03:49:50.897574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:22:59.880 [2024-10-01 03:49:50.897583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.880 [2024-10-01 03:49:50.897589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.880 [2024-10-01 03:49:50.897628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
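Each 'Rollback' entry in this stretch is the shutdown-time counterpart of a startup step, and nearly all are 0.000 ms teardowns; when a shutdown is slow it is usually a single Action that dominates. A throwaway one-liner makes that easy to spot in a saved, line-per-entry copy of this log (ftl.log is a hypothetical filename, and the pairing assumes the strict name/duration alternation that trace_step emits):

    grep -oE 'name: .*|duration: [0-9.]+ ms' ftl.log | paste - -   # pairs each step with its duration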
[FTL][ftl] Rollback 00:22:59.880 [2024-10-01 03:49:50.897636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:22:59.880 [2024-10-01 03:49:50.897643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:22:59.880 [2024-10-01 03:49:50.897649] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:22:59.880 [2024-10-01 03:49:50.897759] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8581.342 ms, result 0 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78142 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78142 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78142 ']' 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:00.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:00.138 03:49:52 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:00.138 [2024-10-01 03:49:52.588598] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
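The target then comes straight back up from the saved tgt.json, so the FTL device is re-created on the same bdevs. The launch-and-wait shape of tcp_target_setup, using the paths traced above (the backgrounding and $! capture are assumed glue; see ftl/common.sh for the real helper):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # polls /var/tmp/spdk.sock until the RPC server answers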
00:23:00.138 [2024-10-01 03:49:52.588716] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78142 ] 00:23:00.396 [2024-10-01 03:49:52.737259] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.396 [2024-10-01 03:49:52.895215] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:01.332 [2024-10-01 03:49:53.520760] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:01.332 [2024-10-01 03:49:53.520816] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:01.332 [2024-10-01 03:49:53.665063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.332 [2024-10-01 03:49:53.665107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:01.332 [2024-10-01 03:49:53.665121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:01.332 [2024-10-01 03:49:53.665129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.332 [2024-10-01 03:49:53.665176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.332 [2024-10-01 03:49:53.665186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:01.332 [2024-10-01 03:49:53.665194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:23:01.332 [2024-10-01 03:49:53.665202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.332 [2024-10-01 03:49:53.665226] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:01.332 [2024-10-01 03:49:53.665924] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:01.332 [2024-10-01 03:49:53.665940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.332 [2024-10-01 03:49:53.665948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:01.332 [2024-10-01 03:49:53.665957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.723 ms 00:23:01.332 [2024-10-01 03:49:53.665967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.332 [2024-10-01 03:49:53.667307] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:01.332 [2024-10-01 03:49:53.680116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.332 [2024-10-01 03:49:53.680308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:01.332 [2024-10-01 03:49:53.680329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.810 ms 00:23:01.332 [2024-10-01 03:49:53.680338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.332 [2024-10-01 03:49:53.680649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.332 [2024-10-01 03:49:53.680687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:01.333 [2024-10-01 03:49:53.680700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:23:01.333 [2024-10-01 03:49:53.680709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.333 [2024-10-01 03:49:53.687136] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.333 [2024-10-01 
03:49:53.687300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:01.333 [2024-10-01 03:49:53.687315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.338 ms 00:23:01.333 [2024-10-01 03:49:53.687325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.333 [2024-10-01 03:49:53.687385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.333 [2024-10-01 03:49:53.687394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:01.333 [2024-10-01 03:49:53.687407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:23:01.333 [2024-10-01 03:49:53.687414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.333 [2024-10-01 03:49:53.687459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.333 [2024-10-01 03:49:53.687470] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:01.333 [2024-10-01 03:49:53.687478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:23:01.333 [2024-10-01 03:49:53.687486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.333 [2024-10-01 03:49:53.687512] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:01.333 [2024-10-01 03:49:53.691117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.333 [2024-10-01 03:49:53.691145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:01.333 [2024-10-01 03:49:53.691154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.611 ms 00:23:01.333 [2024-10-01 03:49:53.691162] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.333 [2024-10-01 03:49:53.691187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.333 [2024-10-01 03:49:53.691196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:01.333 [2024-10-01 03:49:53.691207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:23:01.333 [2024-10-01 03:49:53.691214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.333 [2024-10-01 03:49:53.691249] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:01.333 [2024-10-01 03:49:53.691268] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:23:01.333 [2024-10-01 03:49:53.691304] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:01.333 [2024-10-01 03:49:53.691319] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:23:01.333 [2024-10-01 03:49:53.691425] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:01.333 [2024-10-01 03:49:53.691438] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:01.333 [2024-10-01 03:49:53.691449] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:01.333 [2024-10-01 03:49:53.691459] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:01.333 [2024-10-01 03:49:53.691469] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:23:01.333 [2024-10-01 03:49:53.691477] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:01.333 [2024-10-01 03:49:53.691484] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:01.333 [2024-10-01 03:49:53.691491] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:01.333 [2024-10-01 03:49:53.691499] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:01.333 [2024-10-01 03:49:53.691506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.333 [2024-10-01 03:49:53.691514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:01.333 [2024-10-01 03:49:53.691523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.260 ms 00:23:01.333 [2024-10-01 03:49:53.691533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.333 [2024-10-01 03:49:53.691617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.333 [2024-10-01 03:49:53.691625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:01.333 [2024-10-01 03:49:53.691633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:23:01.333 [2024-10-01 03:49:53.691640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.333 [2024-10-01 03:49:53.691756] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:01.333 [2024-10-01 03:49:53.691768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:01.333 [2024-10-01 03:49:53.691777] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:01.333 [2024-10-01 03:49:53.691785] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.691795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:01.333 [2024-10-01 03:49:53.691802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.691809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:01.333 [2024-10-01 03:49:53.691816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:01.333 [2024-10-01 03:49:53.691824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:01.333 [2024-10-01 03:49:53.691831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.691839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:01.333 [2024-10-01 03:49:53.691845] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:01.333 [2024-10-01 03:49:53.691852] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.691858] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:01.333 [2024-10-01 03:49:53.691865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:23:01.333 [2024-10-01 03:49:53.691873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.691880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:01.333 [2024-10-01 03:49:53.691886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:01.333 [2024-10-01 03:49:53.691893] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.691900] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:01.333 [2024-10-01 03:49:53.691907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:01.333 [2024-10-01 03:49:53.691914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:01.333 [2024-10-01 03:49:53.691921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:01.333 [2024-10-01 03:49:53.691927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:01.333 [2024-10-01 03:49:53.691939] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:01.333 [2024-10-01 03:49:53.691945] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:01.333 [2024-10-01 03:49:53.691952] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:01.333 [2024-10-01 03:49:53.691958] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:01.333 [2024-10-01 03:49:53.691965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:01.333 [2024-10-01 03:49:53.691972] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:01.333 [2024-10-01 03:49:53.691978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:01.333 [2024-10-01 03:49:53.691984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:01.333 [2024-10-01 03:49:53.691990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:01.333 [2024-10-01 03:49:53.691997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.692021] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:01.333 [2024-10-01 03:49:53.692028] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:01.333 [2024-10-01 03:49:53.692034] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.692041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:01.333 [2024-10-01 03:49:53.692048] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:01.333 [2024-10-01 03:49:53.692055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.692063] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:01.333 [2024-10-01 03:49:53.692069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:01.333 [2024-10-01 03:49:53.692075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.692081] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:01.333 [2024-10-01 03:49:53.692092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:01.333 [2024-10-01 03:49:53.692099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:01.333 [2024-10-01 03:49:53.692106] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:01.333 [2024-10-01 03:49:53.692118] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:01.333 [2024-10-01 03:49:53.692126] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:01.333 [2024-10-01 03:49:53.692132] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:01.333 [2024-10-01 03:49:53.692139] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:01.333 [2024-10-01 03:49:53.692145] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:01.333 [2024-10-01 03:49:53.692152] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:01.333 [2024-10-01 03:49:53.692160] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:01.333 [2024-10-01 03:49:53.692169] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.333 [2024-10-01 03:49:53.692179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:01.333 [2024-10-01 03:49:53.692186] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:01.333 [2024-10-01 03:49:53.692193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:01.333 [2024-10-01 03:49:53.692200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:01.333 [2024-10-01 03:49:53.692207] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:01.333 [2024-10-01 03:49:53.692214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:01.334 [2024-10-01 03:49:53.692221] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:01.334 [2024-10-01 03:49:53.692228] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:01.334 [2024-10-01 03:49:53.692235] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:01.334 [2024-10-01 03:49:53.692243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:01.334 [2024-10-01 03:49:53.692250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:01.334 [2024-10-01 03:49:53.692257] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:01.334 [2024-10-01 03:49:53.692264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:01.334 [2024-10-01 03:49:53.692271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:01.334 [2024-10-01 03:49:53.692278] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:23:01.334 [2024-10-01 03:49:53.692286] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:01.334 [2024-10-01 03:49:53.692293] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:01.334 [2024-10-01 03:49:53.692300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
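The Region lines encode the same layout as the MiB dump a little earlier, in units of 4 KiB FTL blocks (the block size is an assumption, but it is consistent with every entry here). For example the L2P region, type:0x2 blk_offs:0x20 blk_sz:0xe80, lands at offset 0.12 MiB with size 14.50 MiB:

    python3 -c 'print(0x20 * 4096 / 2**20, 0xe80 * 4096 / 2**20)'   # -> 0.125 14.5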
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:01.334 [2024-10-01 03:49:53.692307] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:01.334 [2024-10-01 03:49:53.692315] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:01.334 [2024-10-01 03:49:53.692322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:01.334 [2024-10-01 03:49:53.692330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:01.334 [2024-10-01 03:49:53.692339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.635 ms 00:23:01.334 [2024-10-01 03:49:53.692347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:01.334 [2024-10-01 03:49:53.692392] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:23:01.334 [2024-10-01 03:49:53.692402] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:23:03.868 [2024-10-01 03:49:56.120696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.120912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:23:03.868 [2024-10-01 03:49:56.120935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2428.293 ms 00:23:03.868 [2024-10-01 03:49:56.120951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.149522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.149573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:03.868 [2024-10-01 03:49:56.149586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 28.345 ms 00:23:03.868 [2024-10-01 03:49:56.149593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.149674] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.149685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:03.868 [2024-10-01 03:49:56.149694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:23:03.868 [2024-10-01 03:49:56.149702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.192605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.192652] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:03.868 [2024-10-01 03:49:56.192666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 42.864 ms 00:23:03.868 [2024-10-01 03:49:56.192675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.192715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.192725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:03.868 [2024-10-01 03:49:56.192734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:03.868 [2024-10-01 03:49:56.192741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.193342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.193365] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:03.868 [2024-10-01 03:49:56.193376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.525 ms 00:23:03.868 [2024-10-01 03:49:56.193384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.193435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.193445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:03.868 [2024-10-01 03:49:56.193454] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.023 ms 00:23:03.868 [2024-10-01 03:49:56.193462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.210190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.210419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:03.868 [2024-10-01 03:49:56.210438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.705 ms 00:23:03.868 [2024-10-01 03:49:56.210456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.224784] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:03.868 [2024-10-01 03:49:56.224828] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:03.868 [2024-10-01 03:49:56.224842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.224852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:23:03.868 [2024-10-01 03:49:56.224862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.258 ms 00:23:03.868 [2024-10-01 03:49:56.224870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.239681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.239728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:23:03.868 [2024-10-01 03:49:56.239740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.760 ms 00:23:03.868 [2024-10-01 03:49:56.239749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.252011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.252055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:23:03.868 [2024-10-01 03:49:56.252068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.195 ms 00:23:03.868 [2024-10-01 03:49:56.252076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.264375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.264419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:23:03.868 [2024-10-01 03:49:56.264431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.249 ms 00:23:03.868 [2024-10-01 03:49:56.264440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.265135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.265167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:03.868 [2024-10-01 
03:49:56.265180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.576 ms 00:23:03.868 [2024-10-01 03:49:56.265189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.338974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.339045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:03.868 [2024-10-01 03:49:56.339061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 73.763 ms 00:23:03.868 [2024-10-01 03:49:56.339072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.350284] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:03.868 [2024-10-01 03:49:56.351621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.351665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:03.868 [2024-10-01 03:49:56.351685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.471 ms 00:23:03.868 [2024-10-01 03:49:56.351694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.351825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.351840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:23:03.868 [2024-10-01 03:49:56.351851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:23:03.868 [2024-10-01 03:49:56.351861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.351927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.351939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:03.868 [2024-10-01 03:49:56.351948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.026 ms 00:23:03.868 [2024-10-01 03:49:56.351961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.351986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.351996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:03.868 [2024-10-01 03:49:56.352032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:03.868 [2024-10-01 03:49:56.352042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.868 [2024-10-01 03:49:56.352088] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:03.868 [2024-10-01 03:49:56.352101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.868 [2024-10-01 03:49:56.352110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:03.869 [2024-10-01 03:49:56.352120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:23:03.869 [2024-10-01 03:49:56.352130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.869 [2024-10-01 03:49:56.377733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.869 [2024-10-01 03:49:56.377930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:23:03.869 [2024-10-01 03:49:56.377953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.575 ms 00:23:03.869 [2024-10-01 03:49:56.377963] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.869 [2024-10-01 03:49:56.378083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:03.869 [2024-10-01 03:49:56.378096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:03.869 [2024-10-01 03:49:56.378106] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.049 ms 00:23:03.869 [2024-10-01 03:49:56.378118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:03.869 [2024-10-01 03:49:56.379615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2713.923 ms, result 0 00:23:03.869 [2024-10-01 03:49:56.394357] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:03.869 [2024-10-01 03:49:56.410364] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:04.131 [2024-10-01 03:49:56.418617] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:04.391 03:49:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:04.391 03:49:56 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:04.391 03:49:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:04.391 03:49:56 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:23:04.392 03:49:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:23:04.654 [2024-10-01 03:49:57.027035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.654 [2024-10-01 03:49:57.027087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:23:04.654 [2024-10-01 03:49:57.027101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:04.654 [2024-10-01 03:49:57.027112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.654 [2024-10-01 03:49:57.027138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.654 [2024-10-01 03:49:57.027148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:23:04.654 [2024-10-01 03:49:57.027157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:04.654 [2024-10-01 03:49:57.027170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.654 [2024-10-01 03:49:57.027192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:04.654 [2024-10-01 03:49:57.027205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:23:04.654 [2024-10-01 03:49:57.027215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:04.654 [2024-10-01 03:49:57.027222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:04.654 [2024-10-01 03:49:57.027283] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.254 ms, result 0 00:23:04.654 true 00:23:04.654 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:04.915 { 00:23:04.915 "name": "ftl", 00:23:04.915 "properties": [ 00:23:04.915 { 00:23:04.915 "name": "superblock_version", 00:23:04.915 "value": 5, 00:23:04.915 "read-only": true 00:23:04.915 }, 
00:23:04.915 { 00:23:04.915 "name": "base_device", 00:23:04.915 "bands": [ 00:23:04.915 { 00:23:04.915 "id": 0, 00:23:04.915 "state": "CLOSED", 00:23:04.915 "validity": 1.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 1, 00:23:04.915 "state": "CLOSED", 00:23:04.915 "validity": 1.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 2, 00:23:04.915 "state": "CLOSED", 00:23:04.915 "validity": 0.007843137254901933 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 3, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 4, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 5, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 6, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 7, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 8, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 9, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 10, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 11, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 12, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 13, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 14, 00:23:04.915 "state": "FREE", 00:23:04.915 "validity": 0.0 00:23:04.915 }, 00:23:04.915 { 00:23:04.915 "id": 15, 00:23:04.915 "state": "FREE", 00:23:04.916 "validity": 0.0 00:23:04.916 }, 00:23:04.916 { 00:23:04.916 "id": 16, 00:23:04.916 "state": "FREE", 00:23:04.916 "validity": 0.0 00:23:04.916 }, 00:23:04.916 { 00:23:04.916 "id": 17, 00:23:04.916 "state": "FREE", 00:23:04.916 "validity": 0.0 00:23:04.916 } 00:23:04.916 ], 00:23:04.916 "read-only": true 00:23:04.916 }, 00:23:04.916 { 00:23:04.916 "name": "cache_device", 00:23:04.916 "type": "bdev", 00:23:04.916 "chunks": [ 00:23:04.916 { 00:23:04.916 "id": 0, 00:23:04.916 "state": "INACTIVE", 00:23:04.916 "utilization": 0.0 00:23:04.916 }, 00:23:04.916 { 00:23:04.916 "id": 1, 00:23:04.916 "state": "OPEN", 00:23:04.916 "utilization": 0.0 00:23:04.916 }, 00:23:04.916 { 00:23:04.916 "id": 2, 00:23:04.916 "state": "OPEN", 00:23:04.916 "utilization": 0.0 00:23:04.916 }, 00:23:04.916 { 00:23:04.916 "id": 3, 00:23:04.916 "state": "FREE", 00:23:04.916 "utilization": 0.0 00:23:04.916 }, 00:23:04.916 { 00:23:04.916 "id": 4, 00:23:04.916 "state": "FREE", 00:23:04.916 "utilization": 0.0 00:23:04.916 } 00:23:04.916 ], 00:23:04.916 "read-only": true 00:23:04.916 }, 00:23:04.916 { 00:23:04.916 "name": "verbose_mode", 00:23:04.916 "value": true, 00:23:04.916 "unit": "", 00:23:04.916 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:23:04.916 }, 00:23:04.916 { 00:23:04.916 "name": "prep_upgrade_on_shutdown", 00:23:04.916 "value": false, 00:23:04.916 "unit": "", 00:23:04.916 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:23:04.916 } 00:23:04.916 ] 00:23:04.916 } 00:23:04.916 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:23:04.916 03:49:57 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:23:04.916 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:23:05.177 Validate MD5 checksum, iteration 1 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:05.177 03:49:57 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:05.437 [2024-10-01 03:49:57.740346] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
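The used/opened checks traced here are plain jq filters over the bdev_ftl_get_properties JSON dumped above, and both have to come back 0 before the checksum passes start. A minimal sketch of the same two queries, assuming only the rpc.py path and the bdev name "ftl" shown in the trace:

    #!/usr/bin/env bash
    # Fetch the FTL property JSON once (mirrors the ftl_get_properties helper
    # traced at upgrade_shutdown.sh@59 above).
    props=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl)

    # Cache chunks with non-zero utilization; 0 on this run, nothing written yet.
    used=$(jq '[.properties[] | select(.name == "cache_device")
                | .chunks[] | select(.utilization != 0.0)] | length' <<<"$props")

    # Bands still in the OPENED state. Note the selector asks for a property
    # literally named "bands"; in the dump above the band list sits under
    # "base_device", so against this JSON the filter matches nothing and
    # yields 0 either way.
    opened=$(jq '[.properties[] | select(.name == "bands")
                 | .bands[] | select(.state == "OPENED")] | length' <<<"$props")

    [[ $used -eq 0 && $opened -eq 0 ]] || exit 1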
00:23:05.437 [2024-10-01 03:49:57.740608] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78216 ] 00:23:05.437 [2024-10-01 03:49:57.881708] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:05.698 [2024-10-01 03:49:58.062519] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:09.009  Copying: 655/1024 [MB] (655 MBps) Copying: 1024/1024 [MB] (average 667 MBps) 00:23:09.009 00:23:09.009 03:50:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:09.009 03:50:01 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=415ac4e05d7e8a6b5ebc9517913f0a82 00:23:10.910 Validate MD5 checksum, iteration 2 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 415ac4e05d7e8a6b5ebc9517913f0a82 != \4\1\5\a\c\4\e\0\5\d\7\e\8\a\6\b\5\e\b\c\9\5\1\7\9\1\3\f\0\a\8\2 ]] 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:10.910 03:50:03 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:11.169 [2024-10-01 03:50:03.489073] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
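Each validation iteration reads 1 GiB (1024 blocks of 1 MiB at queue depth 2) out of the ftln1 bdev over NVMe/TCP into test/ftl/file, hashes the file, and compares the digest against the one recorded when the data was originally written. The backslash-riddled right-hand side in the [[ ... != \4\1\5\a... ]] trace is just xtrace escaping the pattern operand character by character; it is the same 415ac4... hash. The loop shape, reconstructed from the trace (tcp_dd is the harness wrapper from ftl/common.sh around spdk_dd; the expected[] array stands in for wherever the harness keeps the reference hashes and is an assumption):

    # Validate two 1 GiB stripes, advancing --skip by 1024 one-MiB blocks per pass.
    skip=0
    iterations=2
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testfile" | cut -f1 -d' ')
        [[ $sum == "${expected[i]}" ]] || exit 1   # any mismatch fails the test
    done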
00:23:11.169 [2024-10-01 03:50:03.489318] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78283 ] 00:23:11.169 [2024-10-01 03:50:03.639436] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:11.428 [2024-10-01 03:50:03.811840] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.312  Copying: 653/1024 [MB] (653 MBps) Copying: 1024/1024 [MB] (average 663 MBps) 00:23:14.312 00:23:14.312 03:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:14.312 03:50:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ba5547434569a14f0f33de280c41ec6d 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ba5547434569a14f0f33de280c41ec6d != \b\a\5\5\4\7\4\3\4\5\6\9\a\1\4\f\0\f\3\3\d\e\2\8\0\c\4\1\e\c\6\d ]] 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 78142 ]] 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 78142 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=78339 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 78339 00:23:16.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@831 -- # '[' -z 78339 ']' 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
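With both digests matching, tcp_target_shutdown_dirty kills the target with SIGKILL instead of letting it shut down, so FTL never gets to write a clean shutdown marker and the superblock stays dirty on disk. The replacement target (pid 78339) is then started from the tgt.json captured earlier and has to take the crash-recovery path on startup, which is exactly what the "SHM: clean 0, shm_clean 0", "Recover band state" and "Recover open chunk" actions below show. The kill-and-restart shape, with binary and config paths taken from the trace and the backgrounding detail assumed (the "line 830: 78142 Killed" message further down is just bash's job-control report for the SIGKILLed process, not a failure):

    # Dirty shutdown: SIGKILL denies FTL any chance to close out cleanly.
    kill -9 "$spdk_tgt_pid"
    unset spdk_tgt_pid

    # Relaunch from the saved JSON config; FTL sees the dirty superblock
    # and runs recovery as part of 'FTL startup'.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # returns once /var/tmp/spdk.sock accepts RPCs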
00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:16.282 03:50:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:16.282 [2024-10-01 03:50:08.649461] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:16.283 [2024-10-01 03:50:08.649992] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78339 ] 00:23:16.283 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 830: 78142 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:23:16.283 [2024-10-01 03:50:08.793146] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.544 [2024-10-01 03:50:08.967188] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.113 [2024-10-01 03:50:09.589742] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:17.113 [2024-10-01 03:50:09.589959] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:23:17.375 [2024-10-01 03:50:09.734428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.375 [2024-10-01 03:50:09.734548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:23:17.375 [2024-10-01 03:50:09.734665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:17.375 [2024-10-01 03:50:09.734675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.734722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.375 [2024-10-01 03:50:09.734731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:17.375 [2024-10-01 03:50:09.734739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:23:17.375 [2024-10-01 03:50:09.734745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.734769] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:23:17.375 [2024-10-01 03:50:09.735314] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:23:17.375 [2024-10-01 03:50:09.735329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.375 [2024-10-01 03:50:09.735336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:17.375 [2024-10-01 03:50:09.735343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.569 ms 00:23:17.375 [2024-10-01 03:50:09.735351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.735579] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:23:17.375 [2024-10-01 03:50:09.748958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.375 [2024-10-01 03:50:09.749070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:23:17.375 [2024-10-01 03:50:09.749131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.380 ms 00:23:17.375 [2024-10-01 03:50:09.749150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.756134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:23:17.375 [2024-10-01 03:50:09.756219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:23:17.375 [2024-10-01 03:50:09.756263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:23:17.375 [2024-10-01 03:50:09.756280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.756545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.375 [2024-10-01 03:50:09.756729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:17.375 [2024-10-01 03:50:09.756750] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.192 ms 00:23:17.375 [2024-10-01 03:50:09.756766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.756819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.375 [2024-10-01 03:50:09.756838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:17.375 [2024-10-01 03:50:09.756853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:23:17.375 [2024-10-01 03:50:09.756867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.756899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.375 [2024-10-01 03:50:09.756960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:23:17.375 [2024-10-01 03:50:09.756981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:17.375 [2024-10-01 03:50:09.756997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.757040] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:23:17.375 [2024-10-01 03:50:09.759304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.375 [2024-10-01 03:50:09.759393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:17.375 [2024-10-01 03:50:09.759437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.269 ms 00:23:17.375 [2024-10-01 03:50:09.759455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.759489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.375 [2024-10-01 03:50:09.759505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:23:17.375 [2024-10-01 03:50:09.759521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:23:17.375 [2024-10-01 03:50:09.759534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.375 [2024-10-01 03:50:09.759559] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:23:17.375 [2024-10-01 03:50:09.759586] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:23:17.375 [2024-10-01 03:50:09.759660] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:23:17.375 [2024-10-01 03:50:09.759692] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:23:17.375 [2024-10-01 03:50:09.759820] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:23:17.376 [2024-10-01 03:50:09.759847] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:23:17.376 [2024-10-01 03:50:09.759894] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:23:17.376 [2024-10-01 03:50:09.760040] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:23:17.376 [2024-10-01 03:50:09.760064] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:23:17.376 [2024-10-01 03:50:09.760087] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:23:17.376 [2024-10-01 03:50:09.760105] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:23:17.376 [2024-10-01 03:50:09.760119] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:23:17.376 [2024-10-01 03:50:09.760133] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:23:17.376 [2024-10-01 03:50:09.760229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.376 [2024-10-01 03:50:09.760247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:23:17.376 [2024-10-01 03:50:09.760262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.672 ms 00:23:17.376 [2024-10-01 03:50:09.760278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.376 [2024-10-01 03:50:09.760355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.376 [2024-10-01 03:50:09.760372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:23:17.376 [2024-10-01 03:50:09.760388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:23:17.376 [2024-10-01 03:50:09.760500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.376 [2024-10-01 03:50:09.760603] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:23:17.376 [2024-10-01 03:50:09.760624] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:23:17.376 [2024-10-01 03:50:09.760750] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:17.376 [2024-10-01 03:50:09.760759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760766] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:23:17.376 [2024-10-01 03:50:09.760772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:23:17.376 [2024-10-01 03:50:09.760782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:23:17.376 [2024-10-01 03:50:09.760788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:23:17.376 [2024-10-01 03:50:09.760793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760798] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:23:17.376 [2024-10-01 03:50:09.760804] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:23:17.376 [2024-10-01 03:50:09.760809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760814] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:23:17.376 [2024-10-01 03:50:09.760819] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:23:17.376 [2024-10-01 03:50:09.760824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:23:17.376 [2024-10-01 03:50:09.760835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:23:17.376 [2024-10-01 03:50:09.760840] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:23:17.376 [2024-10-01 03:50:09.760851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:23:17.376 [2024-10-01 03:50:09.760856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:17.376 [2024-10-01 03:50:09.760867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:23:17.376 [2024-10-01 03:50:09.760873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:23:17.376 [2024-10-01 03:50:09.760882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:17.376 [2024-10-01 03:50:09.760888] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:23:17.376 [2024-10-01 03:50:09.760892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:23:17.376 [2024-10-01 03:50:09.760897] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:17.376 [2024-10-01 03:50:09.760902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:23:17.376 [2024-10-01 03:50:09.760907] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:23:17.376 [2024-10-01 03:50:09.760912] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:23:17.376 [2024-10-01 03:50:09.760917] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:23:17.376 [2024-10-01 03:50:09.760922] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:23:17.376 [2024-10-01 03:50:09.760928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760934] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:23:17.376 [2024-10-01 03:50:09.760939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:23:17.376 [2024-10-01 03:50:09.760944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:23:17.376 [2024-10-01 03:50:09.760954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760965] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:23:17.376 [2024-10-01 03:50:09.760970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:23:17.376 [2024-10-01 03:50:09.760975] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.760980] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:23:17.376 [2024-10-01 03:50:09.760987] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:23:17.376 [2024-10-01 03:50:09.760993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:23:17.376 [2024-10-01 03:50:09.760999] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:23:17.376 [2024-10-01 03:50:09.761019] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:23:17.376 [2024-10-01 03:50:09.761024] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:23:17.376 [2024-10-01 03:50:09.761029] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:23:17.376 [2024-10-01 03:50:09.761035] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:23:17.376 [2024-10-01 03:50:09.761040] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:23:17.376 [2024-10-01 03:50:09.761045] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:23:17.376 [2024-10-01 03:50:09.761052] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:23:17.376 [2024-10-01 03:50:09.761059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:23:17.376 [2024-10-01 03:50:09.761077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:23:17.376 [2024-10-01 03:50:09.761093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:23:17.376 [2024-10-01 03:50:09.761099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:23:17.376 [2024-10-01 03:50:09.761104] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:23:17.376 [2024-10-01 03:50:09.761110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761121] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761131] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:23:17.376 [2024-10-01 03:50:09.761149] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:23:17.376 [2024-10-01 03:50:09.761155] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761162] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:17.376 [2024-10-01 03:50:09.761167] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:23:17.376 [2024-10-01 03:50:09.761173] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:23:17.376 [2024-10-01 03:50:09.761178] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:23:17.376 [2024-10-01 03:50:09.761185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.376 [2024-10-01 03:50:09.761190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:23:17.376 [2024-10-01 03:50:09.761196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.634 ms 00:23:17.376 [2024-10-01 03:50:09.761201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.376 [2024-10-01 03:50:09.782326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.376 [2024-10-01 03:50:09.782353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:17.376 [2024-10-01 03:50:09.782362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.083 ms 00:23:17.376 [2024-10-01 03:50:09.782368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.376 [2024-10-01 03:50:09.782398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.376 [2024-10-01 03:50:09.782405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:23:17.377 [2024-10-01 03:50:09.782428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:23:17.377 [2024-10-01 03:50:09.782434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.822936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.822968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:17.377 [2024-10-01 03:50:09.822978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.462 ms 00:23:17.377 [2024-10-01 03:50:09.822985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.823027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.823035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:17.377 [2024-10-01 03:50:09.823042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:23:17.377 [2024-10-01 03:50:09.823048] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.823131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.823141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:17.377 [2024-10-01 03:50:09.823149] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.039 ms 00:23:17.377 [2024-10-01 03:50:09.823177] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.823211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.823222] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:17.377 [2024-10-01 03:50:09.823228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.019 ms 00:23:17.377 [2024-10-01 03:50:09.823235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.835508] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.835535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:17.377 [2024-10-01 03:50:09.835543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.257 ms 00:23:17.377 [2024-10-01 03:50:09.835549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.835639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.835648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:23:17.377 [2024-10-01 03:50:09.835655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:17.377 [2024-10-01 03:50:09.835661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.849489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.849516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:23:17.377 [2024-10-01 03:50:09.849525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.812 ms 00:23:17.377 [2024-10-01 03:50:09.849535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.856736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.856760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:23:17.377 [2024-10-01 03:50:09.856767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.396 ms 00:23:17.377 [2024-10-01 03:50:09.856773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.906083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.906116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:23:17.377 [2024-10-01 03:50:09.906126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 49.271 ms 00:23:17.377 [2024-10-01 03:50:09.906133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.906254] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:23:17.377 [2024-10-01 03:50:09.906352] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:23:17.377 [2024-10-01 03:50:09.906458] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:23:17.377 [2024-10-01 03:50:09.906554] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:23:17.377 [2024-10-01 03:50:09.906565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.906572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:23:17.377 [2024-10-01 
03:50:09.906582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.400 ms 00:23:17.377 [2024-10-01 03:50:09.906589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.906627] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:23:17.377 [2024-10-01 03:50:09.906637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.906644] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:23:17.377 [2024-10-01 03:50:09.906650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.011 ms 00:23:17.377 [2024-10-01 03:50:09.906657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.377 [2024-10-01 03:50:09.918780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.377 [2024-10-01 03:50:09.918807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:23:17.377 [2024-10-01 03:50:09.918817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.108 ms 00:23:17.377 [2024-10-01 03:50:09.918823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.638 [2024-10-01 03:50:09.925200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.638 [2024-10-01 03:50:09.925223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:23:17.638 [2024-10-01 03:50:09.925231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:17.638 [2024-10-01 03:50:09.925239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:17.638 [2024-10-01 03:50:09.925300] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:23:17.638 [2024-10-01 03:50:09.925447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:17.638 [2024-10-01 03:50:09.925457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:17.638 [2024-10-01 03:50:09.925464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.148 ms 00:23:17.638 [2024-10-01 03:50:09.925471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:18.212 [2024-10-01 03:50:10.556545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:18.212 [2024-10-01 03:50:10.556650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:18.212 [2024-10-01 03:50:10.556670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 630.470 ms 00:23:18.212 [2024-10-01 03:50:10.556681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:18.212 [2024-10-01 03:50:10.562096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:18.212 [2024-10-01 03:50:10.562158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:18.212 [2024-10-01 03:50:10.562170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.970 ms 00:23:18.212 [2024-10-01 03:50:10.562180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:18.212 [2024-10-01 03:50:10.563069] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:23:18.212 [2024-10-01 03:50:10.563116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:18.212 [2024-10-01 03:50:10.563127] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:18.212 [2024-10-01 03:50:10.563140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.898 ms 00:23:18.212 [2024-10-01 03:50:10.563149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:18.213 [2024-10-01 03:50:10.563201] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:18.213 [2024-10-01 03:50:10.563211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:18.213 [2024-10-01 03:50:10.563222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:23:18.213 [2024-10-01 03:50:10.563230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:18.213 [2024-10-01 03:50:10.563268] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 637.960 ms, result 0 00:23:18.213 [2024-10-01 03:50:10.563315] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:23:18.213 [2024-10-01 03:50:10.563617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:18.213 [2024-10-01 03:50:10.563745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:23:18.213 [2024-10-01 03:50:10.563761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.303 ms 00:23:18.213 [2024-10-01 03:50:10.563770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.336334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.336368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:23:19.157 [2024-10-01 03:50:11.336377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 771.295 ms 00:23:19.157 [2024-10-01 03:50:11.336384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.339513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.339540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:23:19.157 [2024-10-01 03:50:11.339547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.934 ms 00:23:19.157 [2024-10-01 03:50:11.339554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.340084] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:23:19.157 [2024-10-01 03:50:11.340110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.340116] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:23:19.157 [2024-10-01 03:50:11.340123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.533 ms 00:23:19.157 [2024-10-01 03:50:11.340129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.340152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.340159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:23:19.157 [2024-10-01 03:50:11.340165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:23:19.157 [2024-10-01 03:50:11.340171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 
03:50:11.340196] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 776.885 ms, result 0 00:23:19.157 [2024-10-01 03:50:11.340229] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:19.157 [2024-10-01 03:50:11.340239] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:23:19.157 [2024-10-01 03:50:11.340247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.340253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:23:19.157 [2024-10-01 03:50:11.340263] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1414.958 ms 00:23:19.157 [2024-10-01 03:50:11.340270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.340295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.340303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:23:19.157 [2024-10-01 03:50:11.340309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:19.157 [2024-10-01 03:50:11.340315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.349523] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:23:19.157 [2024-10-01 03:50:11.349766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.349779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:23:19.157 [2024-10-01 03:50:11.349787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.439 ms 00:23:19.157 [2024-10-01 03:50:11.349794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.350359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.350374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:23:19.157 [2024-10-01 03:50:11.350381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.510 ms 00:23:19.157 [2024-10-01 03:50:11.350387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.352116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.352134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:23:19.157 [2024-10-01 03:50:11.352141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.716 ms 00:23:19.157 [2024-10-01 03:50:11.352147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.352179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.352186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:23:19.157 [2024-10-01 03:50:11.352193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:23:19.157 [2024-10-01 03:50:11.352198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.352279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.352288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:23:19.157 
[2024-10-01 03:50:11.352294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:23:19.157 [2024-10-01 03:50:11.352300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.352317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.352325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:23:19.157 [2024-10-01 03:50:11.352331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:23:19.157 [2024-10-01 03:50:11.352337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.352364] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:23:19.157 [2024-10-01 03:50:11.352371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.352377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:23:19.157 [2024-10-01 03:50:11.352384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:23:19.157 [2024-10-01 03:50:11.352391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.352433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:19.157 [2024-10-01 03:50:11.352442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:23:19.157 [2024-10-01 03:50:11.352451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:23:19.157 [2024-10-01 03:50:11.352456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:19.157 [2024-10-01 03:50:11.353474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1618.514 ms, result 0 00:23:19.157 [2024-10-01 03:50:11.366040] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.157 [2024-10-01 03:50:11.382044] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:23:19.157 [2024-10-01 03:50:11.390168] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:23:19.157 03:50:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:19.157 03:50:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # return 0 00:23:19.157 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:23:19.157 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:23:19.157 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:23:19.157 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:23:19.157 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:23:19.157 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:19.157 Validate MD5 checksum, iteration 1 00:23:19.158 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:23:19.158 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:19.158 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:19.158 03:50:11 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:19.158 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:19.158 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:19.158 03:50:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:23:19.158 [2024-10-01 03:50:11.492478] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:19.158 [2024-10-01 03:50:11.492592] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78379 ] 00:23:19.158 [2024-10-01 03:50:11.642532] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.419 [2024-10-01 03:50:11.817761] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.270  Copying: 635/1024 [MB] (635 MBps) Copying: 1024/1024 [MB] (average 640 MBps) 00:23:24.270 00:23:24.270 03:50:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:23:24.270 03:50:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:26.816 Validate MD5 checksum, iteration 2 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=415ac4e05d7e8a6b5ebc9517913f0a82 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 415ac4e05d7e8a6b5ebc9517913f0a82 != \4\1\5\a\c\4\e\0\5\d\7\e\8\a\6\b\5\e\b\c\9\5\1\7\9\1\3\f\0\a\8\2 ]] 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:23:26.816 03:50:18 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:23:26.816 [2024-10-01 03:50:18.953245] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
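The xtrace above is test_validate_checksum doing its first pass: read a 1 GiB window back from the restored ftln1 bdev over NVMe/TCP, hash it, and check it against the digest recorded before the target went down. A condensed sketch of that loop, with the paths and spdk_dd flags copied from the trace (tcp_dd abbreviates the suite's spdk_dd-over-TCP wrapper shown above; ref_sum is a hypothetical name standing in for the saved pre-shutdown digests):

    file=/home/vagrant/spdk_repo/spdk/test/ftl/file
    iterations=2        # this run validates two 1 GiB windows
    skip=0
    for ((i = 0; i < iterations; i++)); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # read 1024 x 1 MiB blocks from the FTL bdev, offset by $skip MiB
        tcp_dd --ib=ftln1 --of="$file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$file" | cut -f1 -d' ')
        [[ $sum == "${ref_sum[i]}" ]] || exit 1   # must equal the pre-shutdown digest
    done

The skip bookkeeping is the important part: each iteration reads a different 1 GiB LBA range, so the comparison covers all of the data written before the shutdown rather than the same region twice.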
00:23:26.816 [2024-10-01 03:50:18.953359] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78458 ] 00:23:26.816 [2024-10-01 03:50:19.100859] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:26.816 [2024-10-01 03:50:19.236534] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 1 00:23:31.303  Copying: 671/1024 [MB] (671 MBps) Copying: 1024/1024 [MB] (average 698 MBps) 00:23:31.303 00:23:31.303 03:50:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:23:31.303 03:50:23 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=ba5547434569a14f0f33de280c41ec6d 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ ba5547434569a14f0f33de280c41ec6d != \b\a\5\5\4\7\4\3\4\5\6\9\a\1\4\f\0\f\3\3\d\e\2\8\0\c\4\1\e\c\6\d ]] 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 78339 ]] 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 78339 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@950 -- # '[' -z 78339 ']' 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # kill -0 78339 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # uname 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78339 00:23:33.215 killing process with pid 78339 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78339' 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@969 -- # kill 78339 00:23:33.215 03:50:25 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@974 -- # wait 78339 00:23:33.787 [2024-10-01 03:50:26.191464] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:23:33.787 [2024-10-01 03:50:26.202351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.787 [2024-10-01 03:50:26.202400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:23:33.788 [2024-10-01 03:50:26.202413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:23:33.788 [2024-10-01 03:50:26.202423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.202444] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:23:33.788 [2024-10-01 03:50:26.204599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.204625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:23:33.788 [2024-10-01 03:50:26.204634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.144 ms 00:23:33.788 [2024-10-01 03:50:26.204641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.204850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.204860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:23:33.788 [2024-10-01 03:50:26.204867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.190 ms 00:23:33.788 [2024-10-01 03:50:26.204873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.206556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.206588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:23:33.788 [2024-10-01 03:50:26.206597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.669 ms 00:23:33.788 [2024-10-01 03:50:26.206603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.207473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.207490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:23:33.788 [2024-10-01 03:50:26.207497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.841 ms 00:23:33.788 [2024-10-01 03:50:26.207503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.215632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.215659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:23:33.788 [2024-10-01 03:50:26.215668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.095 ms 00:23:33.788 [2024-10-01 03:50:26.215674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.219987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.220173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:23:33.788 [2024-10-01 03:50:26.220187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.285 ms 00:23:33.788 [2024-10-01 03:50:26.220193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.220270] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.220283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:23:33.788 [2024-10-01 03:50:26.220291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:23:33.788 [2024-10-01 03:50:26.220298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.227780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.227863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:23:33.788 [2024-10-01 03:50:26.227908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.468 ms 00:23:33.788 [2024-10-01 03:50:26.227925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.235298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.235381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:23:33.788 [2024-10-01 03:50:26.235424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.339 ms 00:23:33.788 [2024-10-01 03:50:26.235440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.242475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.242564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:23:33.788 [2024-10-01 03:50:26.242607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.002 ms 00:23:33.788 [2024-10-01 03:50:26.242624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.250200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.250280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:23:33.788 [2024-10-01 03:50:26.250316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.523 ms 00:23:33.788 [2024-10-01 03:50:26.250332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.250364] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:23:33.788 [2024-10-01 03:50:26.250400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:23:33.788 [2024-10-01 03:50:26.250425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:23:33.788 [2024-10-01 03:50:26.250447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:23:33.788 [2024-10-01 03:50:26.250471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.250526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.250549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.250571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.250593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.250615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 
261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.250637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.250952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.251047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.251092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.251117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.251139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.251190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.251216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.251238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:33.788 [2024-10-01 03:50:26.251262] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:23:33.788 [2024-10-01 03:50:26.251299] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: d93d0fd4-40b3-4195-8069-69c969fdae9f 00:23:33.788 [2024-10-01 03:50:26.251325] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:23:33.788 [2024-10-01 03:50:26.251339] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:23:33.788 [2024-10-01 03:50:26.251470] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:23:33.788 [2024-10-01 03:50:26.251494] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:23:33.788 [2024-10-01 03:50:26.251509] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:23:33.788 [2024-10-01 03:50:26.251524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:23:33.788 [2024-10-01 03:50:26.251558] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:23:33.788 [2024-10-01 03:50:26.251575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:23:33.788 [2024-10-01 03:50:26.251588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:23:33.788 [2024-10-01 03:50:26.251604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.251620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:23:33.788 [2024-10-01 03:50:26.251723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.241 ms 00:23:33.788 [2024-10-01 03:50:26.251740] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.261836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:23:33.788 [2024-10-01 03:50:26.261982] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:23:33.788 [2024-10-01 03:50:26.262068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.064 ms 00:23:33.788 [2024-10-01 03:50:26.262089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.262399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
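The per-band dump above (ftl_debug.c) is the last snapshot of device state before teardown: two bands fully valid, a third holding 2048 valid blocks, and the remaining fifteen free. When skimming many runs it helps to roll those NOTICE lines into a single figure; a hypothetical one-liner over a saved console log (the log file name here is assumed):

    grep -o 'Band [0-9]*: [0-9]* / [0-9]*' ftl_shutdown.log |
        awk '{ valid += $3; total += $5 }
             END { printf "%d / %d blocks valid (%.1f%%)\n", valid, total, 100 * valid / total }'

On this run that reports 524288 valid blocks, agreeing with the "total valid LBAs: 524288" line in the statistics dump just above.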
00:23:33.788 [2024-10-01 03:50:26.262426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:23:33.788 [2024-10-01 03:50:26.262470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.284 ms 00:23:33.788 [2024-10-01 03:50:26.262488] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.293539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:33.788 [2024-10-01 03:50:26.293632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:23:33.788 [2024-10-01 03:50:26.293670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:33.788 [2024-10-01 03:50:26.293687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.293724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:33.788 [2024-10-01 03:50:26.293742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:23:33.788 [2024-10-01 03:50:26.293757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:33.788 [2024-10-01 03:50:26.293772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.293853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:33.788 [2024-10-01 03:50:26.293874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:23:33.788 [2024-10-01 03:50:26.293926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:33.788 [2024-10-01 03:50:26.293944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:33.788 [2024-10-01 03:50:26.293968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:33.788 [2024-10-01 03:50:26.293984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:23:33.788 [2024-10-01 03:50:26.293999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:33.788 [2024-10-01 03:50:26.294025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.050 [2024-10-01 03:50:26.357542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:34.050 [2024-10-01 03:50:26.357654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:23:34.050 [2024-10-01 03:50:26.357694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:34.050 [2024-10-01 03:50:26.357712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.050 [2024-10-01 03:50:26.409262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:34.050 [2024-10-01 03:50:26.409372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:23:34.050 [2024-10-01 03:50:26.409410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:34.050 [2024-10-01 03:50:26.409428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.050 [2024-10-01 03:50:26.409502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:34.050 [2024-10-01 03:50:26.409521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:23:34.050 [2024-10-01 03:50:26.409538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:34.050 [2024-10-01 03:50:26.409557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.050 [2024-10-01 03:50:26.409622] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:34.050 [2024-10-01 03:50:26.409681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:23:34.050 [2024-10-01 03:50:26.409703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:34.050 [2024-10-01 03:50:26.409719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.050 [2024-10-01 03:50:26.409810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:34.050 [2024-10-01 03:50:26.410227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:23:34.050 [2024-10-01 03:50:26.410262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:34.050 [2024-10-01 03:50:26.410280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.050 [2024-10-01 03:50:26.410333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:34.050 [2024-10-01 03:50:26.410456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:23:34.050 [2024-10-01 03:50:26.410476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:34.050 [2024-10-01 03:50:26.410492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.050 [2024-10-01 03:50:26.410561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:34.050 [2024-10-01 03:50:26.410582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:23:34.050 [2024-10-01 03:50:26.410624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:34.050 [2024-10-01 03:50:26.410642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.050 [2024-10-01 03:50:26.410703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:23:34.050 [2024-10-01 03:50:26.410746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:23:34.050 [2024-10-01 03:50:26.410764] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:23:34.050 [2024-10-01 03:50:26.410780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:23:34.050 [2024-10-01 03:50:26.410908] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 208.524 ms, result 0 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:34.994 Remove shared memory files 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid78142 00:23:34.994 03:50:27 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:23:34.994 ************************************ 00:23:34.994 END TEST ftl_upgrade_shutdown 00:23:34.994 ************************************ 00:23:34.994 00:23:34.994 real 1m17.871s 00:23:34.994 user 1m48.379s 00:23:34.994 sys 0m19.021s 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:34.994 03:50:27 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:34.994 03:50:27 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:23:34.994 03:50:27 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:34.994 03:50:27 ftl -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:23:34.994 03:50:27 ftl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:34.994 03:50:27 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:34.994 ************************************ 00:23:34.994 START TEST ftl_restore_fast 00:23:34.994 ************************************ 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:23:34.994 * Looking for test storage... 00:23:34.994 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lcov --version 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:23:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.994 --rc genhtml_branch_coverage=1 00:23:34.994 --rc genhtml_function_coverage=1 00:23:34.994 --rc genhtml_legend=1 00:23:34.994 --rc geninfo_all_blocks=1 00:23:34.994 --rc geninfo_unexecuted_blocks=1 00:23:34.994 00:23:34.994 ' 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:23:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.994 --rc genhtml_branch_coverage=1 00:23:34.994 --rc genhtml_function_coverage=1 00:23:34.994 --rc genhtml_legend=1 00:23:34.994 --rc geninfo_all_blocks=1 00:23:34.994 --rc geninfo_unexecuted_blocks=1 00:23:34.994 00:23:34.994 ' 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:23:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.994 --rc genhtml_branch_coverage=1 00:23:34.994 --rc genhtml_function_coverage=1 00:23:34.994 --rc genhtml_legend=1 00:23:34.994 --rc geninfo_all_blocks=1 00:23:34.994 --rc geninfo_unexecuted_blocks=1 00:23:34.994 00:23:34.994 ' 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:23:34.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:34.994 --rc genhtml_branch_coverage=1 00:23:34.994 --rc genhtml_function_coverage=1 00:23:34.994 --rc genhtml_legend=1 00:23:34.994 --rc geninfo_all_blocks=1 00:23:34.994 --rc geninfo_unexecuted_blocks=1 00:23:34.994 00:23:34.994 ' 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:34.994 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
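The scripts/common.sh arithmetic stepped through above is the harness's lcov version gate: lt 1.15 2 splits each version string on '.', '-' and ':' and compares numeric components left to right. A self-contained sketch of the same comparison (the real helper also routes each component through a decimal() sanitizer, which this sketch omits by assuming purely numeric fields):

    version_lt() {    # version_lt 1.15 2  ->  status 0, i.e. 1.15 < 2
        local -a ver1 ver2
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            local c1=${ver1[v]:-0} c2=${ver2[v]:-0}   # missing components count as 0
            ((c1 < c2)) && return 0                   # first difference decides
            ((c1 > c2)) && return 1
        done
        return 1                                      # equal versions are not less-than
    }
    version_lt "$(lcov --version | awk '{print $NF}')" 2 &&
        echo "lcov 1.x: keep the old --rc option names"

Here 1 < 2 already in the first component, so the helper returns 0 and the trace goes on to export the 1.x-style lcov_branch_coverage/lcov_function_coverage options.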
00:23:34.994 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.d31CZLHeks 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:23:34.995 03:50:27 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=78625 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 78625 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@831 -- # '[' -z 78625 ']' 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:34.995 03:50:27 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:23:34.995 [2024-10-01 03:50:27.511528] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:23:34.995 [2024-10-01 03:50:27.511750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78625 ] 00:23:35.255 [2024-10-01 03:50:27.653684] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.516 [2024-10-01 03:50:27.820619] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # return 0 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=nvme0n1 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1381 -- # local nb 00:23:36.088 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:36.350 { 00:23:36.350 "name": "nvme0n1", 00:23:36.350 "aliases": [ 00:23:36.350 "b2b990be-683b-4751-a6b2-4da6f97aa641" 00:23:36.350 ], 00:23:36.350 "product_name": "NVMe disk", 00:23:36.350 "block_size": 4096, 00:23:36.350 "num_blocks": 1310720, 00:23:36.350 "uuid": "b2b990be-683b-4751-a6b2-4da6f97aa641", 00:23:36.350 "numa_id": -1, 00:23:36.350 "assigned_rate_limits": { 00:23:36.350 "rw_ios_per_sec": 0, 00:23:36.350 "rw_mbytes_per_sec": 0, 00:23:36.350 "r_mbytes_per_sec": 0, 00:23:36.350 "w_mbytes_per_sec": 0 00:23:36.350 }, 00:23:36.350 "claimed": true, 00:23:36.350 "claim_type": "read_many_write_one", 00:23:36.350 "zoned": false, 00:23:36.350 "supported_io_types": { 00:23:36.350 "read": true, 00:23:36.350 "write": true, 00:23:36.350 "unmap": true, 00:23:36.350 "flush": true, 00:23:36.350 "reset": true, 00:23:36.350 "nvme_admin": true, 00:23:36.350 "nvme_io": true, 00:23:36.350 "nvme_io_md": false, 00:23:36.350 "write_zeroes": true, 00:23:36.350 "zcopy": false, 00:23:36.350 "get_zone_info": false, 00:23:36.350 "zone_management": false, 00:23:36.350 "zone_append": false, 00:23:36.350 "compare": true, 00:23:36.350 "compare_and_write": false, 00:23:36.350 "abort": true, 00:23:36.350 "seek_hole": false, 00:23:36.350 "seek_data": false, 00:23:36.350 "copy": true, 00:23:36.350 "nvme_iov_md": false 00:23:36.350 }, 00:23:36.350 "driver_specific": { 00:23:36.350 "nvme": [ 00:23:36.350 { 00:23:36.350 "pci_address": "0000:00:11.0", 00:23:36.350 "trid": { 00:23:36.350 "trtype": "PCIe", 00:23:36.350 "traddr": "0000:00:11.0" 00:23:36.350 }, 00:23:36.350 "ctrlr_data": { 00:23:36.350 "cntlid": 0, 00:23:36.350 "vendor_id": "0x1b36", 00:23:36.350 "model_number": "QEMU NVMe Ctrl", 00:23:36.350 "serial_number": "12341", 00:23:36.350 "firmware_revision": "8.0.0", 00:23:36.350 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:36.350 "oacs": { 00:23:36.350 "security": 0, 00:23:36.350 "format": 1, 00:23:36.350 "firmware": 0, 00:23:36.350 "ns_manage": 1 00:23:36.350 }, 00:23:36.350 "multi_ctrlr": false, 00:23:36.350 "ana_reporting": false 00:23:36.350 }, 00:23:36.350 "vs": { 00:23:36.350 "nvme_version": "1.4" 00:23:36.350 }, 00:23:36.350 "ns_data": { 00:23:36.350 "id": 1, 00:23:36.350 "can_share": false 00:23:36.350 } 00:23:36.350 } 00:23:36.350 ], 00:23:36.350 "mp_policy": "active_passive" 00:23:36.350 } 00:23:36.350 } 00:23:36.350 ]' 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=1310720 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=5120 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 5120 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:36.350 03:50:28 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:36.614 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=cd48bc65-5440-47ef-9f09-f7cc53b317a7 00:23:36.614 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:23:36.614 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cd48bc65-5440-47ef-9f09-f7cc53b317a7 00:23:36.875 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:37.135 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=5a51668e-cbea-4ddd-a7ff-815d46a3004c 00:23:37.135 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5a51668e-cbea-4ddd-a7ff-815d46a3004c 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size 14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:37.395 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:37.395 { 00:23:37.395 "name": "14c752f3-57a3-4989-b7d1-0012d7ccf7a9", 00:23:37.395 "aliases": [ 00:23:37.395 "lvs/nvme0n1p0" 00:23:37.395 ], 00:23:37.395 "product_name": "Logical Volume", 00:23:37.395 "block_size": 4096, 00:23:37.395 "num_blocks": 26476544, 00:23:37.395 "uuid": "14c752f3-57a3-4989-b7d1-0012d7ccf7a9", 00:23:37.395 "assigned_rate_limits": { 00:23:37.395 "rw_ios_per_sec": 0, 00:23:37.395 "rw_mbytes_per_sec": 0, 00:23:37.395 "r_mbytes_per_sec": 0, 00:23:37.395 "w_mbytes_per_sec": 0 00:23:37.395 }, 00:23:37.395 "claimed": false, 00:23:37.395 "zoned": false, 00:23:37.395 "supported_io_types": { 00:23:37.395 "read": true, 00:23:37.395 "write": true, 00:23:37.395 "unmap": true, 00:23:37.395 "flush": false, 00:23:37.395 "reset": true, 00:23:37.395 "nvme_admin": false, 00:23:37.395 "nvme_io": false, 00:23:37.395 "nvme_io_md": false, 00:23:37.395 "write_zeroes": true, 00:23:37.395 "zcopy": false, 00:23:37.395 "get_zone_info": false, 00:23:37.395 "zone_management": false, 00:23:37.395 
"zone_append": false, 00:23:37.395 "compare": false, 00:23:37.395 "compare_and_write": false, 00:23:37.395 "abort": false, 00:23:37.395 "seek_hole": true, 00:23:37.395 "seek_data": true, 00:23:37.395 "copy": false, 00:23:37.395 "nvme_iov_md": false 00:23:37.395 }, 00:23:37.395 "driver_specific": { 00:23:37.395 "lvol": { 00:23:37.395 "lvol_store_uuid": "5a51668e-cbea-4ddd-a7ff-815d46a3004c", 00:23:37.396 "base_bdev": "nvme0n1", 00:23:37.396 "thin_provision": true, 00:23:37.396 "num_allocated_clusters": 0, 00:23:37.396 "snapshot": false, 00:23:37.396 "clone": false, 00:23:37.396 "esnap_clone": false 00:23:37.396 } 00:23:37.396 } 00:23:37.396 } 00:23:37.396 ]' 00:23:37.396 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:37.396 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:37.396 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:37.655 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:37.655 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:37.655 03:50:29 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:37.655 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:23:37.655 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:23:37.656 03:50:29 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size 14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:37.916 { 00:23:37.916 "name": "14c752f3-57a3-4989-b7d1-0012d7ccf7a9", 00:23:37.916 "aliases": [ 00:23:37.916 "lvs/nvme0n1p0" 00:23:37.916 ], 00:23:37.916 "product_name": "Logical Volume", 00:23:37.916 "block_size": 4096, 00:23:37.916 "num_blocks": 26476544, 00:23:37.916 "uuid": "14c752f3-57a3-4989-b7d1-0012d7ccf7a9", 00:23:37.916 "assigned_rate_limits": { 00:23:37.916 "rw_ios_per_sec": 0, 00:23:37.916 "rw_mbytes_per_sec": 0, 00:23:37.916 "r_mbytes_per_sec": 0, 00:23:37.916 "w_mbytes_per_sec": 0 00:23:37.916 }, 00:23:37.916 "claimed": false, 00:23:37.916 "zoned": false, 00:23:37.916 "supported_io_types": { 00:23:37.916 "read": true, 00:23:37.916 "write": true, 00:23:37.916 "unmap": true, 00:23:37.916 "flush": false, 00:23:37.916 "reset": true, 00:23:37.916 "nvme_admin": false, 00:23:37.916 "nvme_io": false, 00:23:37.916 "nvme_io_md": false, 00:23:37.916 "write_zeroes": true, 00:23:37.916 "zcopy": false, 00:23:37.916 "get_zone_info": false, 00:23:37.916 
"zone_management": false, 00:23:37.916 "zone_append": false, 00:23:37.916 "compare": false, 00:23:37.916 "compare_and_write": false, 00:23:37.916 "abort": false, 00:23:37.916 "seek_hole": true, 00:23:37.916 "seek_data": true, 00:23:37.916 "copy": false, 00:23:37.916 "nvme_iov_md": false 00:23:37.916 }, 00:23:37.916 "driver_specific": { 00:23:37.916 "lvol": { 00:23:37.916 "lvol_store_uuid": "5a51668e-cbea-4ddd-a7ff-815d46a3004c", 00:23:37.916 "base_bdev": "nvme0n1", 00:23:37.916 "thin_provision": true, 00:23:37.916 "num_allocated_clusters": 0, 00:23:37.916 "snapshot": false, 00:23:37.916 "clone": false, 00:23:37.916 "esnap_clone": false 00:23:37.916 } 00:23:37.916 } 00:23:37.916 } 00:23:37.916 ]' 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:23:37.916 03:50:30 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:38.176 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:23:38.176 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size 14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:38.176 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1378 -- # local bdev_name=14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:38.176 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1379 -- # local bdev_info 00:23:38.176 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1380 -- # local bs 00:23:38.176 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1381 -- # local nb 00:23:38.176 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 14c752f3-57a3-4989-b7d1-0012d7ccf7a9 00:23:38.436 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:23:38.436 { 00:23:38.436 "name": "14c752f3-57a3-4989-b7d1-0012d7ccf7a9", 00:23:38.436 "aliases": [ 00:23:38.436 "lvs/nvme0n1p0" 00:23:38.436 ], 00:23:38.436 "product_name": "Logical Volume", 00:23:38.436 "block_size": 4096, 00:23:38.436 "num_blocks": 26476544, 00:23:38.436 "uuid": "14c752f3-57a3-4989-b7d1-0012d7ccf7a9", 00:23:38.436 "assigned_rate_limits": { 00:23:38.436 "rw_ios_per_sec": 0, 00:23:38.436 "rw_mbytes_per_sec": 0, 00:23:38.436 "r_mbytes_per_sec": 0, 00:23:38.436 "w_mbytes_per_sec": 0 00:23:38.436 }, 00:23:38.436 "claimed": false, 00:23:38.436 "zoned": false, 00:23:38.436 "supported_io_types": { 00:23:38.436 "read": true, 00:23:38.436 "write": true, 00:23:38.436 "unmap": true, 00:23:38.436 "flush": false, 00:23:38.436 "reset": true, 00:23:38.436 "nvme_admin": false, 00:23:38.436 "nvme_io": false, 00:23:38.436 "nvme_io_md": false, 00:23:38.436 "write_zeroes": true, 00:23:38.436 "zcopy": false, 00:23:38.436 "get_zone_info": false, 00:23:38.436 "zone_management": false, 00:23:38.436 "zone_append": false, 00:23:38.436 "compare": false, 00:23:38.436 "compare_and_write": false, 00:23:38.436 "abort": false, 
00:23:38.436 "seek_hole": true, 00:23:38.436 "seek_data": true, 00:23:38.436 "copy": false, 00:23:38.437 "nvme_iov_md": false 00:23:38.437 }, 00:23:38.437 "driver_specific": { 00:23:38.437 "lvol": { 00:23:38.437 "lvol_store_uuid": "5a51668e-cbea-4ddd-a7ff-815d46a3004c", 00:23:38.437 "base_bdev": "nvme0n1", 00:23:38.437 "thin_provision": true, 00:23:38.437 "num_allocated_clusters": 0, 00:23:38.437 "snapshot": false, 00:23:38.437 "clone": false, 00:23:38.437 "esnap_clone": false 00:23:38.437 } 00:23:38.437 } 00:23:38.437 } 00:23:38.437 ]' 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # bs=4096 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # nb=26476544 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bdev_size=103424 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # echo 103424 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 14c752f3-57a3-4989-b7d1-0012d7ccf7a9 --l2p_dram_limit 10' 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:23:38.437 03:50:30 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 14c752f3-57a3-4989-b7d1-0012d7ccf7a9 --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:23:38.697 [2024-10-01 03:50:31.105137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.105289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:38.697 [2024-10-01 03:50:31.105310] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:38.697 [2024-10-01 03:50:31.105318] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.105365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.105373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:38.697 [2024-10-01 03:50:31.105382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:23:38.697 [2024-10-01 03:50:31.105390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.105413] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:38.697 [2024-10-01 03:50:31.105953] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:38.697 [2024-10-01 03:50:31.105973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.105979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:38.697 [2024-10-01 03:50:31.105987] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.565 ms 00:23:38.697 [2024-10-01 03:50:31.105995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.106032] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 62a17931-fd73-4654-bf05-2a16be32a316 00:23:38.697 [2024-10-01 03:50:31.107330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.107360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:38.697 [2024-10-01 03:50:31.107369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:23:38.697 [2024-10-01 03:50:31.107378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.114320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.114414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:38.697 [2024-10-01 03:50:31.114456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.898 ms 00:23:38.697 [2024-10-01 03:50:31.114475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.114622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.114648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:38.697 [2024-10-01 03:50:31.114665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:38.697 [2024-10-01 03:50:31.114687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.114761] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.114785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:38.697 [2024-10-01 03:50:31.114802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:23:38.697 [2024-10-01 03:50:31.114817] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.114843] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:38.697 [2024-10-01 03:50:31.118080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.118162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:38.697 [2024-10-01 03:50:31.118206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.239 ms 00:23:38.697 [2024-10-01 03:50:31.118223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.118261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.118276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:38.697 [2024-10-01 03:50:31.118295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:23:38.697 [2024-10-01 03:50:31.118312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.118342] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:38.697 [2024-10-01 03:50:31.118479] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:38.697 [2024-10-01 03:50:31.118626] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:38.697 [2024-10-01 03:50:31.118656] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:38.697 [2024-10-01 03:50:31.118686] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:38.697 [2024-10-01 03:50:31.118711] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:38.697 [2024-10-01 03:50:31.118736] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:38.697 [2024-10-01 03:50:31.118752] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:38.697 [2024-10-01 03:50:31.118768] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:38.697 [2024-10-01 03:50:31.118784] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:38.697 [2024-10-01 03:50:31.118801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.118860] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:38.697 [2024-10-01 03:50:31.118881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.460 ms 00:23:38.697 [2024-10-01 03:50:31.118896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.118976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.697 [2024-10-01 03:50:31.119342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:38.697 [2024-10-01 03:50:31.119441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:23:38.697 [2024-10-01 03:50:31.119459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.697 [2024-10-01 03:50:31.119560] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:38.697 [2024-10-01 03:50:31.119581] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:38.697 [2024-10-01 03:50:31.119598] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:38.697 [2024-10-01 03:50:31.119613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.697 [2024-10-01 03:50:31.119630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:38.697 [2024-10-01 03:50:31.119645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:38.697 [2024-10-01 03:50:31.119688] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:38.697 [2024-10-01 03:50:31.119705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:38.697 [2024-10-01 03:50:31.119721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:38.697 [2024-10-01 03:50:31.119736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.697 [2024-10-01 03:50:31.119752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:38.697 [2024-10-01 03:50:31.119793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:38.697 [2024-10-01 03:50:31.119812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:38.697 [2024-10-01 03:50:31.119826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:38.697 [2024-10-01 03:50:31.119842] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:38.697 [2024-10-01 03:50:31.119855] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.697 [2024-10-01 03:50:31.119894] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:38.697 [2024-10-01 03:50:31.119911] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:38.697 [2024-10-01 03:50:31.119927] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.697 [2024-10-01 03:50:31.119942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:38.697 [2024-10-01 03:50:31.119985] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:38.697 [2024-10-01 03:50:31.120035] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.697 [2024-10-01 03:50:31.120055] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:38.697 [2024-10-01 03:50:31.120069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:38.698 [2024-10-01 03:50:31.120108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.698 [2024-10-01 03:50:31.120126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:38.698 [2024-10-01 03:50:31.120141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:38.698 [2024-10-01 03:50:31.120156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.698 [2024-10-01 03:50:31.120174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:38.698 [2024-10-01 03:50:31.120210] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:38.698 [2024-10-01 03:50:31.120230] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:38.698 [2024-10-01 03:50:31.120244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:38.698 [2024-10-01 03:50:31.120262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:38.698 [2024-10-01 03:50:31.120395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.698 [2024-10-01 03:50:31.120415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:38.698 [2024-10-01 03:50:31.120430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:38.698 [2024-10-01 03:50:31.120445] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:38.698 [2024-10-01 03:50:31.120479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:38.698 [2024-10-01 03:50:31.120489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:38.698 [2024-10-01 03:50:31.120495] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.698 [2024-10-01 03:50:31.120502] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:38.698 [2024-10-01 03:50:31.120507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:38.698 [2024-10-01 03:50:31.120515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.698 [2024-10-01 03:50:31.120521] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:38.698 [2024-10-01 03:50:31.120528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:38.698 [2024-10-01 03:50:31.120536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:23:38.698 [2024-10-01 03:50:31.120544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:38.698 [2024-10-01 03:50:31.120551] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:38.698 [2024-10-01 03:50:31.120560] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:38.698 [2024-10-01 03:50:31.120566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:38.698 [2024-10-01 03:50:31.120573] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:38.698 [2024-10-01 03:50:31.120578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:38.698 [2024-10-01 03:50:31.120585] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:38.698 [2024-10-01 03:50:31.120594] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:38.698 [2024-10-01 03:50:31.120604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.698 [2024-10-01 03:50:31.120611] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:38.698 [2024-10-01 03:50:31.120618] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:38.698 [2024-10-01 03:50:31.120624] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:38.698 [2024-10-01 03:50:31.120631] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:38.698 [2024-10-01 03:50:31.120636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:38.698 [2024-10-01 03:50:31.120643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:38.698 [2024-10-01 03:50:31.120649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:38.698 [2024-10-01 03:50:31.120657] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:38.698 [2024-10-01 03:50:31.120662] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:38.698 [2024-10-01 03:50:31.120671] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:38.698 [2024-10-01 03:50:31.120677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:38.698 [2024-10-01 03:50:31.120684] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:38.698 [2024-10-01 03:50:31.120690] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:38.698 [2024-10-01 03:50:31.120707] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:23:38.698 [2024-10-01 03:50:31.120713] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:38.698 [2024-10-01 03:50:31.120721] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:38.698 [2024-10-01 03:50:31.120728] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:38.698 [2024-10-01 03:50:31.120735] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:38.698 [2024-10-01 03:50:31.120741] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:38.698 [2024-10-01 03:50:31.120748] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:38.698 [2024-10-01 03:50:31.120754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:38.698 [2024-10-01 03:50:31.120763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:38.698 [2024-10-01 03:50:31.120769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.250 ms 00:23:38.698 [2024-10-01 03:50:31.120776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:38.698 [2024-10-01 03:50:31.120811] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:38.698 [2024-10-01 03:50:31.120823] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:42.905 [2024-10-01 03:50:34.742925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.743159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:42.905 [2024-10-01 03:50:34.743568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3622.100 ms 00:23:42.905 [2024-10-01 03:50:34.743635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.780204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.780405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:42.905 [2024-10-01 03:50:34.780799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.232 ms 00:23:42.905 [2024-10-01 03:50:34.780860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.781132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.781251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:42.905 [2024-10-01 03:50:34.781316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:23:42.905 [2024-10-01 03:50:34.781354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.831371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.831575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:42.905 [2024-10-01 03:50:34.831601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.948 ms 00:23:42.905 [2024-10-01 03:50:34.831617] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.831663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.831676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:42.905 [2024-10-01 03:50:34.831686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:42.905 [2024-10-01 03:50:34.831706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.832462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.832511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:42.905 [2024-10-01 03:50:34.832523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.691 ms 00:23:42.905 [2024-10-01 03:50:34.832537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.832659] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.832673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:42.905 [2024-10-01 03:50:34.832683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:23:42.905 [2024-10-01 03:50:34.832698] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.852619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.852666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:42.905 [2024-10-01 03:50:34.852679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.899 ms 00:23:42.905 [2024-10-01 03:50:34.852691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.867742] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:42.905 [2024-10-01 03:50:34.872727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.872771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:42.905 [2024-10-01 03:50:34.872788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.926 ms 00:23:42.905 [2024-10-01 03:50:34.872798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.971606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.971657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:42.905 [2024-10-01 03:50:34.971676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.775 ms 00:23:42.905 [2024-10-01 03:50:34.971686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.971899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.971913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:42.905 [2024-10-01 03:50:34.971929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:23:42.905 [2024-10-01 03:50:34.971938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:34.998063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:34.998259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:23:42.905 [2024-10-01 03:50:34.998287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.069 ms 00:23:42.905 [2024-10-01 03:50:34.998297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:35.022868] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:35.022914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:42.905 [2024-10-01 03:50:35.022929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.524 ms 00:23:42.905 [2024-10-01 03:50:35.022938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:35.023566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:35.023589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:42.905 [2024-10-01 03:50:35.023602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:23:42.905 [2024-10-01 03:50:35.023611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:35.113600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:35.113780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:42.905 [2024-10-01 03:50:35.113809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 89.942 ms 00:23:42.905 [2024-10-01 03:50:35.113821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:35.142681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.905 [2024-10-01 03:50:35.142730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:42.905 [2024-10-01 03:50:35.142747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.783 ms 00:23:42.905 [2024-10-01 03:50:35.142756] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.905 [2024-10-01 03:50:35.168626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.906 [2024-10-01 03:50:35.168822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:42.906 [2024-10-01 03:50:35.168847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.814 ms 00:23:42.906 [2024-10-01 03:50:35.168857] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.906 [2024-10-01 03:50:35.194610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.906 [2024-10-01 03:50:35.194657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:42.906 [2024-10-01 03:50:35.194674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.704 ms 00:23:42.906 [2024-10-01 03:50:35.194683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.906 [2024-10-01 03:50:35.194739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.906 [2024-10-01 03:50:35.194750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:42.906 [2024-10-01 03:50:35.194769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:42.906 [2024-10-01 03:50:35.194778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.906 [2024-10-01 03:50:35.194876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:42.906 [2024-10-01 
03:50:35.194887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:42.906 [2024-10-01 03:50:35.194899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:23:42.906 [2024-10-01 03:50:35.194907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:42.906 [2024-10-01 03:50:35.196362] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4090.600 ms, result 0 00:23:42.906 { 00:23:42.906 "name": "ftl0", 00:23:42.906 "uuid": "62a17931-fd73-4654-bf05-2a16be32a316" 00:23:42.906 } 00:23:42.906 03:50:35 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:23:42.906 03:50:35 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:42.906 03:50:35 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}' 00:23:42.906 03:50:35 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:23:43.480 [2024-10-01 03:50:35.727548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.727602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:43.480 [2024-10-01 03:50:35.727615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:43.480 [2024-10-01 03:50:35.727627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.727652] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:43.480 [2024-10-01 03:50:35.731100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.731142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:43.480 [2024-10-01 03:50:35.731168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.427 ms 00:23:43.480 [2024-10-01 03:50:35.731177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.731467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.731480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:43.480 [2024-10-01 03:50:35.731494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.260 ms 00:23:43.480 [2024-10-01 03:50:35.731502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.734774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.734986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:43.480 [2024-10-01 03:50:35.735028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.252 ms 00:23:43.480 [2024-10-01 03:50:35.735042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.741299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.741336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:43.480 [2024-10-01 03:50:35.741351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.228 ms 00:23:43.480 [2024-10-01 03:50:35.741359] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.761516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.761554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:43.480 [2024-10-01 03:50:35.761567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.081 ms 00:23:43.480 [2024-10-01 03:50:35.761573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.776771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.776809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:43.480 [2024-10-01 03:50:35.776823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.153 ms 00:23:43.480 [2024-10-01 03:50:35.776830] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.776959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.776971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:43.480 [2024-10-01 03:50:35.776981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.088 ms 00:23:43.480 [2024-10-01 03:50:35.776989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.796560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.796690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:43.480 [2024-10-01 03:50:35.796709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.537 ms 00:23:43.480 [2024-10-01 03:50:35.796715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.815499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.815621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:43.480 [2024-10-01 03:50:35.815638] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.752 ms 00:23:43.480 [2024-10-01 03:50:35.815644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.833480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.833514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:43.480 [2024-10-01 03:50:35.833525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.804 ms 00:23:43.480 [2024-10-01 03:50:35.833531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.851349] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.480 [2024-10-01 03:50:35.851374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:43.480 [2024-10-01 03:50:35.851383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.758 ms 00:23:43.480 [2024-10-01 03:50:35.851389] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.480 [2024-10-01 03:50:35.851420] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:43.480 [2024-10-01 03:50:35.851432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851447] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851475] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851623] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851633] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:43.480 [2024-10-01 03:50:35.851697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 
[2024-10-01 03:50:35.851798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851909] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:23:43.481 [2024-10-01 03:50:35.851978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.851997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:43.481 [2024-10-01 03:50:35.852157] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:43.481 [2024-10-01 03:50:35.852168] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a17931-fd73-4654-bf05-2a16be32a316 
00:23:43.481 [2024-10-01 03:50:35.852175] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:43.481 [2024-10-01 03:50:35.852184] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:43.481 [2024-10-01 03:50:35.852190] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:43.481 [2024-10-01 03:50:35.852198] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:43.481 [2024-10-01 03:50:35.852203] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:43.481 [2024-10-01 03:50:35.852211] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:43.481 [2024-10-01 03:50:35.852219] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:43.481 [2024-10-01 03:50:35.852226] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:43.481 [2024-10-01 03:50:35.852231] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:43.481 [2024-10-01 03:50:35.852238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.481 [2024-10-01 03:50:35.852244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:43.481 [2024-10-01 03:50:35.852252] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.819 ms 00:23:43.481 [2024-10-01 03:50:35.852257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.481 [2024-10-01 03:50:35.862042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.481 [2024-10-01 03:50:35.862067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:43.481 [2024-10-01 03:50:35.862077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.760 ms 00:23:43.481 [2024-10-01 03:50:35.862083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.481 [2024-10-01 03:50:35.862362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:43.481 [2024-10-01 03:50:35.862370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:43.481 [2024-10-01 03:50:35.862378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms 00:23:43.481 [2024-10-01 03:50:35.862383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.481 [2024-10-01 03:50:35.893488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.481 [2024-10-01 03:50:35.893594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:43.481 [2024-10-01 03:50:35.893609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.481 [2024-10-01 03:50:35.893617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.481 [2024-10-01 03:50:35.893668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.481 [2024-10-01 03:50:35.893674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:43.481 [2024-10-01 03:50:35.893682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.481 [2024-10-01 03:50:35.893687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.481 [2024-10-01 03:50:35.893742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.481 [2024-10-01 03:50:35.893751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:43.482 [2024-10-01 03:50:35.893759] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:35.893764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:35.893784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.482 [2024-10-01 03:50:35.893790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:43.482 [2024-10-01 03:50:35.893798] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:35.893803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:35.957129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.482 [2024-10-01 03:50:35.957164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:43.482 [2024-10-01 03:50:35.957175] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:35.957181] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:36.008664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.482 [2024-10-01 03:50:36.008819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:43.482 [2024-10-01 03:50:36.008836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:36.008843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:36.008920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.482 [2024-10-01 03:50:36.008928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:43.482 [2024-10-01 03:50:36.008936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:36.008942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:36.008999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.482 [2024-10-01 03:50:36.009029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:43.482 [2024-10-01 03:50:36.009037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:36.009043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:36.009125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.482 [2024-10-01 03:50:36.009134] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:43.482 [2024-10-01 03:50:36.009142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:36.009148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:36.009179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.482 [2024-10-01 03:50:36.009186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:43.482 [2024-10-01 03:50:36.009196] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:36.009202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:36.009239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.482 [2024-10-01 03:50:36.009246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev 00:23:43.482 [2024-10-01 03:50:36.009254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:36.009260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:36.009305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:43.482 [2024-10-01 03:50:36.009315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:43.482 [2024-10-01 03:50:36.009323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:43.482 [2024-10-01 03:50:36.009329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:43.482 [2024-10-01 03:50:36.009453] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 281.873 ms, result 0 00:23:43.482 true 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 78625 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78625 ']' 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78625 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # uname 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78625 00:23:43.742 killing process with pid 78625 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78625' 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@969 -- # kill 78625 00:23:43.742 03:50:36 ftl.ftl_restore_fast -- common/autotest_common.sh@974 -- # wait 78625 00:23:50.379 03:50:41 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:23:53.684 262144+0 records in 00:23:53.684 262144+0 records out 00:23:53.684 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 3.79186 s, 283 MB/s 00:23:53.684 03:50:45 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:55.600 03:50:47 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:55.600 [2024-10-01 03:50:47.944202] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:23:55.600 [2024-10-01 03:50:47.944322] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78847 ] 00:23:55.600 [2024-10-01 03:50:48.094907] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.861 [2024-10-01 03:50:48.331300] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.126 [2024-10-01 03:50:48.621734] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.126 [2024-10-01 03:50:48.621827] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:56.390 [2024-10-01 03:50:48.783767] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.390 [2024-10-01 03:50:48.783839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:56.390 [2024-10-01 03:50:48.783855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:56.390 [2024-10-01 03:50:48.783869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.390 [2024-10-01 03:50:48.783928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.390 [2024-10-01 03:50:48.783939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:56.390 [2024-10-01 03:50:48.783949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:23:56.390 [2024-10-01 03:50:48.783958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.390 [2024-10-01 03:50:48.783980] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:56.390 [2024-10-01 03:50:48.784747] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:56.390 [2024-10-01 03:50:48.784785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.390 [2024-10-01 03:50:48.784794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:56.390 [2024-10-01 03:50:48.784803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.810 ms 00:23:56.390 [2024-10-01 03:50:48.784812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.390 [2024-10-01 03:50:48.786633] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:56.390 [2024-10-01 03:50:48.801297] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.390 [2024-10-01 03:50:48.801349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:56.390 [2024-10-01 03:50:48.801365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.667 ms 00:23:56.390 [2024-10-01 03:50:48.801373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.390 [2024-10-01 03:50:48.801460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.390 [2024-10-01 03:50:48.801471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:56.390 [2024-10-01 03:50:48.801480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:23:56.390 [2024-10-01 03:50:48.801489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.390 [2024-10-01 03:50:48.810147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:56.390 [2024-10-01 03:50:48.810191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:56.390 [2024-10-01 03:50:48.810202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.575 ms 00:23:56.390 [2024-10-01 03:50:48.810211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.390 [2024-10-01 03:50:48.810299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.390 [2024-10-01 03:50:48.810321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:56.390 [2024-10-01 03:50:48.810330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:23:56.390 [2024-10-01 03:50:48.810338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.390 [2024-10-01 03:50:48.810388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.390 [2024-10-01 03:50:48.810399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:56.390 [2024-10-01 03:50:48.810408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:56.390 [2024-10-01 03:50:48.810416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.390 [2024-10-01 03:50:48.810440] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:56.391 [2024-10-01 03:50:48.814465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.391 [2024-10-01 03:50:48.814677] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:56.391 [2024-10-01 03:50:48.814700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.031 ms 00:23:56.391 [2024-10-01 03:50:48.814710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.391 [2024-10-01 03:50:48.814752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.391 [2024-10-01 03:50:48.814761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:56.391 [2024-10-01 03:50:48.814770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:56.391 [2024-10-01 03:50:48.814784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.391 [2024-10-01 03:50:48.814840] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:56.391 [2024-10-01 03:50:48.814865] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:56.391 [2024-10-01 03:50:48.814904] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:56.391 [2024-10-01 03:50:48.814920] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:56.391 [2024-10-01 03:50:48.815051] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:56.391 [2024-10-01 03:50:48.815064] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:56.391 [2024-10-01 03:50:48.815079] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:56.391 [2024-10-01 03:50:48.815090] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815099] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815107] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:56.391 [2024-10-01 03:50:48.815115] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:56.391 [2024-10-01 03:50:48.815124] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:56.391 [2024-10-01 03:50:48.815133] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:56.391 [2024-10-01 03:50:48.815142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.391 [2024-10-01 03:50:48.815149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:56.391 [2024-10-01 03:50:48.815157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.305 ms 00:23:56.391 [2024-10-01 03:50:48.815164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.391 [2024-10-01 03:50:48.815253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.391 [2024-10-01 03:50:48.815262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:56.391 [2024-10-01 03:50:48.815270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:56.391 [2024-10-01 03:50:48.815277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.391 [2024-10-01 03:50:48.815383] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:56.391 [2024-10-01 03:50:48.815394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:56.391 [2024-10-01 03:50:48.815403] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815411] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815420] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:56.391 [2024-10-01 03:50:48.815428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:56.391 [2024-10-01 03:50:48.815450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.391 [2024-10-01 03:50:48.815466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:56.391 [2024-10-01 03:50:48.815473] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:56.391 [2024-10-01 03:50:48.815480] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:56.391 [2024-10-01 03:50:48.815500] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:56.391 [2024-10-01 03:50:48.815507] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:56.391 [2024-10-01 03:50:48.815515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:56.391 [2024-10-01 03:50:48.815529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815536] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815543] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:56.391 [2024-10-01 03:50:48.815550] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815557] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815563] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:56.391 [2024-10-01 03:50:48.815571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815577] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:56.391 [2024-10-01 03:50:48.815591] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:56.391 [2024-10-01 03:50:48.815611] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815618] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815625] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:56.391 [2024-10-01 03:50:48.815632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815639] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.391 [2024-10-01 03:50:48.815646] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:56.391 [2024-10-01 03:50:48.815652] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:56.391 [2024-10-01 03:50:48.815659] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:56.391 [2024-10-01 03:50:48.815666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:56.391 [2024-10-01 03:50:48.815672] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:56.391 [2024-10-01 03:50:48.815679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:56.391 [2024-10-01 03:50:48.815692] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:56.391 [2024-10-01 03:50:48.815700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815707] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:56.391 [2024-10-01 03:50:48.815718] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:56.391 [2024-10-01 03:50:48.815729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815737] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:56.391 [2024-10-01 03:50:48.815744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:56.391 [2024-10-01 03:50:48.815752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:56.391 [2024-10-01 03:50:48.815760] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:56.391 
[2024-10-01 03:50:48.815767] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:56.391 [2024-10-01 03:50:48.815774] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:56.391 [2024-10-01 03:50:48.815780] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:56.391 [2024-10-01 03:50:48.815789] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:56.391 [2024-10-01 03:50:48.815799] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.391 [2024-10-01 03:50:48.815807] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:56.391 [2024-10-01 03:50:48.815815] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:56.391 [2024-10-01 03:50:48.815822] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:56.391 [2024-10-01 03:50:48.815830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:56.391 [2024-10-01 03:50:48.815837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:56.392 [2024-10-01 03:50:48.815844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:56.392 [2024-10-01 03:50:48.815851] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:56.392 [2024-10-01 03:50:48.815858] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:56.392 [2024-10-01 03:50:48.815865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:56.392 [2024-10-01 03:50:48.815873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:56.392 [2024-10-01 03:50:48.815880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:56.392 [2024-10-01 03:50:48.815888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:56.392 [2024-10-01 03:50:48.815895] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:56.392 [2024-10-01 03:50:48.815902] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:56.392 [2024-10-01 03:50:48.815909] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:56.392 [2024-10-01 03:50:48.815918] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:56.392 [2024-10-01 03:50:48.815927] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:56.392 [2024-10-01 03:50:48.815935] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:56.392 [2024-10-01 03:50:48.815942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:56.392 [2024-10-01 03:50:48.815949] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:56.392 [2024-10-01 03:50:48.815956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.392 [2024-10-01 03:50:48.815964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:56.392 [2024-10-01 03:50:48.815977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.642 ms 00:23:56.392 [2024-10-01 03:50:48.815984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.392 [2024-10-01 03:50:48.867628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.392 [2024-10-01 03:50:48.867693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:56.392 [2024-10-01 03:50:48.867707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.565 ms 00:23:56.392 [2024-10-01 03:50:48.867720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.392 [2024-10-01 03:50:48.867817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.392 [2024-10-01 03:50:48.867827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:56.392 [2024-10-01 03:50:48.867837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:56.392 [2024-10-01 03:50:48.867844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.392 [2024-10-01 03:50:48.903222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.392 [2024-10-01 03:50:48.903437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:56.392 [2024-10-01 03:50:48.903459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.306 ms 00:23:56.392 [2024-10-01 03:50:48.903468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.392 [2024-10-01 03:50:48.903512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.392 [2024-10-01 03:50:48.903521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:56.392 [2024-10-01 03:50:48.903530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:23:56.392 [2024-10-01 03:50:48.903538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.392 [2024-10-01 03:50:48.904175] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.392 [2024-10-01 03:50:48.904225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:56.392 [2024-10-01 03:50:48.904243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.558 ms 00:23:56.392 [2024-10-01 03:50:48.904251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.392 [2024-10-01 03:50:48.904412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.392 [2024-10-01 03:50:48.904422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:56.392 [2024-10-01 03:50:48.904431] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.128 ms 00:23:56.392 [2024-10-01 03:50:48.904439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.392 [2024-10-01 03:50:48.919396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.392 [2024-10-01 03:50:48.919443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:56.392 [2024-10-01 03:50:48.919455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.937 ms 00:23:56.392 [2024-10-01 03:50:48.919462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.392 [2024-10-01 03:50:48.934183] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:23:56.392 [2024-10-01 03:50:48.934406] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:56.392 [2024-10-01 03:50:48.934429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.392 [2024-10-01 03:50:48.934437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:56.392 [2024-10-01 03:50:48.934447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.850 ms 00:23:56.392 [2024-10-01 03:50:48.934456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.655 [2024-10-01 03:50:48.960664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:48.960730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:56.656 [2024-10-01 03:50:48.960744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.161 ms 00:23:56.656 [2024-10-01 03:50:48.960752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:48.974009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:48.974056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:56.656 [2024-10-01 03:50:48.974069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.189 ms 00:23:56.656 [2024-10-01 03:50:48.974077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:48.987343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:48.987396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:56.656 [2024-10-01 03:50:48.987408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.213 ms 00:23:56.656 [2024-10-01 03:50:48.987416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:48.988088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:48.988115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:56.656 [2024-10-01 03:50:48.988127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.552 ms 00:23:56.656 [2024-10-01 03:50:48.988135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:49.054327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:49.054398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:56.656 [2024-10-01 03:50:49.054414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.171 ms 00:23:56.656 [2024-10-01 03:50:49.054424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:49.066183] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:56.656 [2024-10-01 03:50:49.069478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:49.069675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:56.656 [2024-10-01 03:50:49.069695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.988 ms 00:23:56.656 [2024-10-01 03:50:49.069712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:49.069807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:49.069819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:56.656 [2024-10-01 03:50:49.069829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:23:56.656 [2024-10-01 03:50:49.069838] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:49.069909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:49.069921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:56.656 [2024-10-01 03:50:49.069930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:56.656 [2024-10-01 03:50:49.069938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:49.069963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:49.069972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:56.656 [2024-10-01 03:50:49.069981] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:56.656 [2024-10-01 03:50:49.069989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:49.070051] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:56.656 [2024-10-01 03:50:49.070063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:49.070073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:56.656 [2024-10-01 03:50:49.070086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:56.656 [2024-10-01 03:50:49.070094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:49.096575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:49.096774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:56.656 [2024-10-01 03:50:49.096799] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.461 ms 00:23:56.656 [2024-10-01 03:50:49.096808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:56.656 [2024-10-01 03:50:49.096898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:56.656 [2024-10-01 03:50:49.096910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:56.656 [2024-10-01 03:50:49.096920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:23:56.656 [2024-10-01 03:50:49.096929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:56.656 [2024-10-01 03:50:49.099140] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 314.847 ms, result 0 00:25:00.847  Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-01 03:51:53.035290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.847 [2024-10-01 03:51:53.035362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:00.847 [2024-10-01 03:51:53.035382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:00.847 [2024-10-01 03:51:53.035402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.847 [2024-10-01 03:51:53.035428] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:00.847 [2024-10-01 03:51:53.038906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.847 [2024-10-01 03:51:53.038957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:00.847 [2024-10-01 03:51:53.038970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.458 ms 00:25:00.847 [2024-10-01 03:51:53.038980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.847 [2024-10-01 03:51:53.042078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.847 [2024-10-01 03:51:53.042130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*:
[FTL][ftl0] name: Stop core poller 00:25:00.847 [2024-10-01 03:51:53.042143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.054 ms 00:25:00.847 [2024-10-01 03:51:53.042168] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.847 [2024-10-01 03:51:53.042209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.847 [2024-10-01 03:51:53.042220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:25:00.847 [2024-10-01 03:51:53.042230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:00.847 [2024-10-01 03:51:53.042239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.847 [2024-10-01 03:51:53.042307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.847 [2024-10-01 03:51:53.042320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:25:00.847 [2024-10-01 03:51:53.042329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:00.847 [2024-10-01 03:51:53.042338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.847 [2024-10-01 03:51:53.042354] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:00.847 [2024-10-01 03:51:53.042371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 
0 state: free 00:25:00.847 [2024-10-01 03:51:53.042497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:00.847 [2024-10-01 03:51:53.042520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042776] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 
41: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.042998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043192] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043389] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:00.848 [2024-10-01 03:51:53.043489] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:00.848 [2024-10-01 03:51:53.043500] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a17931-fd73-4654-bf05-2a16be32a316 00:25:00.848 [2024-10-01 03:51:53.043520] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:25:00.848 [2024-10-01 03:51:53.043530] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:25:00.848 [2024-10-01 03:51:53.043539] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:25:00.848 [2024-10-01 03:51:53.043548] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:25:00.848 [2024-10-01 03:51:53.043557] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:00.848 [2024-10-01 03:51:53.043566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:00.848 [2024-10-01 03:51:53.043575] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:00.848 [2024-10-01 03:51:53.043582] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:00.848 [2024-10-01 03:51:53.043589] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:00.848 [2024-10-01 03:51:53.043596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.848 [2024-10-01 03:51:53.043605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:00.849 [2024-10-01 03:51:53.043619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.244 ms 00:25:00.849 [2024-10-01 03:51:53.043627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.058803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-10-01 03:51:53.058859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:00.849 [2024-10-01 03:51:53.058872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.158 ms 00:25:00.849 [2024-10-01 03:51:53.058882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 
[2024-10-01 03:51:53.059334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:00.849 [2024-10-01 03:51:53.059371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:00.849 [2024-10-01 03:51:53.059382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.425 ms 00:25:00.849 [2024-10-01 03:51:53.059391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.093866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.094132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:00.849 [2024-10-01 03:51:53.094182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.094193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.094279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.094298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:00.849 [2024-10-01 03:51:53.094308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.094319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.094386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.094398] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:00.849 [2024-10-01 03:51:53.094414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.094423] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.094441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.094450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:00.849 [2024-10-01 03:51:53.094463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.094473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.186268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.186345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:00.849 [2024-10-01 03:51:53.186361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.186371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.261993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.262078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:00.849 [2024-10-01 03:51:53.262099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.262109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.262232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.262246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:00.849 [2024-10-01 03:51:53.262256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 
03:51:53.262267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.262312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.262324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:00.849 [2024-10-01 03:51:53.262333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.262349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.262444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.262458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:00.849 [2024-10-01 03:51:53.262469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.262478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.262513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.262525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:00.849 [2024-10-01 03:51:53.262534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.262542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.262607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.262620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:00.849 [2024-10-01 03:51:53.262630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.262639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.262695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:00.849 [2024-10-01 03:51:53.262709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:00.849 [2024-10-01 03:51:53.262717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:00.849 [2024-10-01 03:51:53.262732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:00.849 [2024-10-01 03:51:53.262899] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 227.559 ms, result 0 00:25:02.232 00:25:02.232 00:25:02.232 03:51:54 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:25:02.232 [2024-10-01 03:51:54.447515] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
00:25:02.232 [2024-10-01 03:51:54.447663] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79531 ] 00:25:02.232 [2024-10-01 03:51:54.601752] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.494 [2024-10-01 03:51:54.866092] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:25:02.754 [2024-10-01 03:51:55.199696] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:02.754 [2024-10-01 03:51:55.199786] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:03.016 [2024-10-01 03:51:55.364854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.016 [2024-10-01 03:51:55.364922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:03.016 [2024-10-01 03:51:55.364940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:03.016 [2024-10-01 03:51:55.364955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.016 [2024-10-01 03:51:55.365039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.016 [2024-10-01 03:51:55.365052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:03.016 [2024-10-01 03:51:55.365061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:03.016 [2024-10-01 03:51:55.365070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.016 [2024-10-01 03:51:55.365096] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:03.016 [2024-10-01 03:51:55.365842] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:03.016 [2024-10-01 03:51:55.365881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.016 [2024-10-01 03:51:55.365891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:03.016 [2024-10-01 03:51:55.365902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:25:03.016 [2024-10-01 03:51:55.365911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.016 [2024-10-01 03:51:55.366304] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:25:03.016 [2024-10-01 03:51:55.366339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.016 [2024-10-01 03:51:55.366351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:03.016 [2024-10-01 03:51:55.366362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:25:03.016 [2024-10-01 03:51:55.366370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.016 [2024-10-01 03:51:55.366434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.016 [2024-10-01 03:51:55.366446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:03.016 [2024-10-01 03:51:55.366455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:25:03.016 [2024-10-01 03:51:55.366466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.016 [2024-10-01 03:51:55.366752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:03.016 [2024-10-01 03:51:55.366767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:03.016 [2024-10-01 03:51:55.366776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:25:03.016 [2024-10-01 03:51:55.366785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.016 [2024-10-01 03:51:55.366863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.016 [2024-10-01 03:51:55.366874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:03.016 [2024-10-01 03:51:55.366886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:25:03.016 [2024-10-01 03:51:55.366894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.016 [2024-10-01 03:51:55.366920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.016 [2024-10-01 03:51:55.366932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:03.016 [2024-10-01 03:51:55.366942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:03.016 [2024-10-01 03:51:55.366951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.016 [2024-10-01 03:51:55.366975] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:03.016 [2024-10-01 03:51:55.371997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.016 [2024-10-01 03:51:55.372052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:03.016 [2024-10-01 03:51:55.372065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.028 ms 00:25:03.016 [2024-10-01 03:51:55.372074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.017 [2024-10-01 03:51:55.372114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.017 [2024-10-01 03:51:55.372124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:03.017 [2024-10-01 03:51:55.372139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:03.017 [2024-10-01 03:51:55.372149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.017 [2024-10-01 03:51:55.372210] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:03.017 [2024-10-01 03:51:55.372238] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:03.017 [2024-10-01 03:51:55.372280] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:03.017 [2024-10-01 03:51:55.372296] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:03.017 [2024-10-01 03:51:55.372410] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:03.017 [2024-10-01 03:51:55.372427] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:03.017 [2024-10-01 03:51:55.372440] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:03.017 [2024-10-01 03:51:55.372453] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:03.017 [2024-10-01 03:51:55.372466] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:03.017 [2024-10-01 03:51:55.372477] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:03.017 [2024-10-01 03:51:55.372486] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:03.017 [2024-10-01 03:51:55.372494] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:03.017 [2024-10-01 03:51:55.372503] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:03.017 [2024-10-01 03:51:55.372514] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.017 [2024-10-01 03:51:55.372523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:03.017 [2024-10-01 03:51:55.372531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.308 ms 00:25:03.017 [2024-10-01 03:51:55.372542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.017 [2024-10-01 03:51:55.372631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.017 [2024-10-01 03:51:55.372650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:03.017 [2024-10-01 03:51:55.372659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:03.017 [2024-10-01 03:51:55.372668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.017 [2024-10-01 03:51:55.372775] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:03.017 [2024-10-01 03:51:55.372788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:03.017 [2024-10-01 03:51:55.372797] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.017 [2024-10-01 03:51:55.372805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.017 [2024-10-01 03:51:55.372817] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:03.017 [2024-10-01 03:51:55.372825] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:03.017 [2024-10-01 03:51:55.372833] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:03.017 [2024-10-01 03:51:55.372841] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:03.017 [2024-10-01 03:51:55.372848] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:03.017 [2024-10-01 03:51:55.372857] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.017 [2024-10-01 03:51:55.372866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:03.017 [2024-10-01 03:51:55.372874] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:03.017 [2024-10-01 03:51:55.372885] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:03.017 [2024-10-01 03:51:55.372892] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:03.017 [2024-10-01 03:51:55.372900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:03.017 [2024-10-01 03:51:55.372914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.017 [2024-10-01 03:51:55.372922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:03.017 [2024-10-01 03:51:55.372931] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:03.017 [2024-10-01 03:51:55.372938] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.017 [2024-10-01 03:51:55.372946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:03.017 [2024-10-01 03:51:55.372953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:03.017 [2024-10-01 03:51:55.372960] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.017 [2024-10-01 03:51:55.372967] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:03.017 [2024-10-01 03:51:55.372974] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:03.017 [2024-10-01 03:51:55.372980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.017 [2024-10-01 03:51:55.372988] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:03.017 [2024-10-01 03:51:55.372995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:03.017 [2024-10-01 03:51:55.373032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.017 [2024-10-01 03:51:55.373040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:03.017 [2024-10-01 03:51:55.373049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:03.017 [2024-10-01 03:51:55.373056] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:03.017 [2024-10-01 03:51:55.373064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:03.017 [2024-10-01 03:51:55.373071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:03.017 [2024-10-01 03:51:55.373080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.017 [2024-10-01 03:51:55.373088] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:03.017 [2024-10-01 03:51:55.373096] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:03.017 [2024-10-01 03:51:55.373103] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:03.017 [2024-10-01 03:51:55.373111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:03.017 [2024-10-01 03:51:55.373118] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:03.017 [2024-10-01 03:51:55.373127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.017 [2024-10-01 03:51:55.373135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:03.017 [2024-10-01 03:51:55.373142] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:03.017 [2024-10-01 03:51:55.373149] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.017 [2024-10-01 03:51:55.373156] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:03.017 [2024-10-01 03:51:55.373168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:03.017 [2024-10-01 03:51:55.373177] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:03.017 [2024-10-01 03:51:55.373187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:03.017 [2024-10-01 03:51:55.373196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:03.017 [2024-10-01 03:51:55.373203] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:03.017 [2024-10-01 03:51:55.373210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:03.017 
[2024-10-01 03:51:55.373217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:03.017 [2024-10-01 03:51:55.373224] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:03.017 [2024-10-01 03:51:55.373231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:03.017 [2024-10-01 03:51:55.373239] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:03.017 [2024-10-01 03:51:55.373249] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.017 [2024-10-01 03:51:55.373258] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:03.017 [2024-10-01 03:51:55.373266] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:03.017 [2024-10-01 03:51:55.373273] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:03.017 [2024-10-01 03:51:55.373281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:03.017 [2024-10-01 03:51:55.373289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:03.017 [2024-10-01 03:51:55.373305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:03.017 [2024-10-01 03:51:55.373313] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:03.017 [2024-10-01 03:51:55.373320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:03.017 [2024-10-01 03:51:55.373333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:03.017 [2024-10-01 03:51:55.373342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:03.017 [2024-10-01 03:51:55.373350] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:03.017 [2024-10-01 03:51:55.373357] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:03.017 [2024-10-01 03:51:55.373365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:03.017 [2024-10-01 03:51:55.373379] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:03.017 [2024-10-01 03:51:55.373387] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:03.017 [2024-10-01 03:51:55.373401] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:03.017 [2024-10-01 03:51:55.373411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:03.017 [2024-10-01 03:51:55.373418] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:03.017 [2024-10-01 03:51:55.373427] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:03.018 [2024-10-01 03:51:55.373441] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:03.018 [2024-10-01 03:51:55.373449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.373458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:03.018 [2024-10-01 03:51:55.373477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.742 ms 00:25:03.018 [2024-10-01 03:51:55.373485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.414752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.414996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:03.018 [2024-10-01 03:51:55.415309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.220 ms 00:25:03.018 [2024-10-01 03:51:55.415356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.415478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.415583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:03.018 [2024-10-01 03:51:55.415684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:25:03.018 [2024-10-01 03:51:55.415710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.455688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.455877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:03.018 [2024-10-01 03:51:55.455941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.888 ms 00:25:03.018 [2024-10-01 03:51:55.455966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.456039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.456065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:03.018 [2024-10-01 03:51:55.456087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:03.018 [2024-10-01 03:51:55.456109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.456255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.456409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:03.018 [2024-10-01 03:51:55.456433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.073 ms 00:25:03.018 [2024-10-01 03:51:55.456455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.456731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.456756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:03.018 [2024-10-01 03:51:55.456776] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.124 ms 00:25:03.018 [2024-10-01 03:51:55.456868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.473772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.473952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:03.018 [2024-10-01 03:51:55.474025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.810 ms 00:25:03.018 [2024-10-01 03:51:55.474049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.474259] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:25:03.018 [2024-10-01 03:51:55.474308] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:03.018 [2024-10-01 03:51:55.474342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.474364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:03.018 [2024-10-01 03:51:55.474385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:25:03.018 [2024-10-01 03:51:55.474482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.486807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.486969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:03.018 [2024-10-01 03:51:55.487057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.280 ms 00:25:03.018 [2024-10-01 03:51:55.487092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.487250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.487392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:03.018 [2024-10-01 03:51:55.487462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.114 ms 00:25:03.018 [2024-10-01 03:51:55.487487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.487557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.487584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:03.018 [2024-10-01 03:51:55.487605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:25:03.018 [2024-10-01 03:51:55.487624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.488296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.488465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:03.018 [2024-10-01 03:51:55.488530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.606 ms 00:25:03.018 [2024-10-01 03:51:55.488554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.488618] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:25:03.018 [2024-10-01 03:51:55.488658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.488848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:25:03.018 [2024-10-01 03:51:55.488897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:25:03.018 [2024-10-01 03:51:55.488980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.503614] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:03.018 [2024-10-01 03:51:55.503915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.503956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:03.018 [2024-10-01 03:51:55.504082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.829 ms 00:25:03.018 [2024-10-01 03:51:55.504108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.506497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.506664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:03.018 [2024-10-01 03:51:55.506725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.299 ms 00:25:03.018 [2024-10-01 03:51:55.506750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.506870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.506907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:03.018 [2024-10-01 03:51:55.506929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:25:03.018 [2024-10-01 03:51:55.507053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.507093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.507105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:03.018 [2024-10-01 03:51:55.507115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:03.018 [2024-10-01 03:51:55.507124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.507168] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:03.018 [2024-10-01 03:51:55.507182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.507192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:03.018 [2024-10-01 03:51:55.507204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:25:03.018 [2024-10-01 03:51:55.507213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.535976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.536218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:03.018 [2024-10-01 03:51:55.536244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.742 ms 00:25:03.018 [2024-10-01 03:51:55.536255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.536344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:03.018 [2024-10-01 03:51:55.536363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:03.018 [2024-10-01 03:51:55.536373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.046 ms 00:25:03.018 [2024-10-01 03:51:55.536383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:03.018 [2024-10-01 03:51:55.537828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 172.450 ms, result 0 00:26:12.482  Copying: 1024/1024 [MB] (average 14 MBps)[2024-10-01 03:53:04.910179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.482 [2024-10-01 03:53:04.910485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:12.482 [2024-10-01 03:53:04.910593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:12.482 [2024-10-01 03:53:04.910631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.482 [2024-10-01 03:53:04.910691] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:12.482 [2024-10-01 03:53:04.916011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.482 [2024-10-01 03:53:04.916199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:12.482 [2024-10-01 03:53:04.916286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*:
[FTL][ftl0] duration: 5.182 ms 00:26:12.482 [2024-10-01 03:53:04.916321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.482 [2024-10-01 03:53:04.916661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.482 [2024-10-01 03:53:04.916712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:12.482 [2024-10-01 03:53:04.916743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.288 ms 00:26:12.482 [2024-10-01 03:53:04.916828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.482 [2024-10-01 03:53:04.916892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.482 [2024-10-01 03:53:04.916979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:26:12.482 [2024-10-01 03:53:04.917034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:12.482 [2024-10-01 03:53:04.917049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.482 [2024-10-01 03:53:04.917127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.482 [2024-10-01 03:53:04.917143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:26:12.482 [2024-10-01 03:53:04.917160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:26:12.482 [2024-10-01 03:53:04.917170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.482 [2024-10-01 03:53:04.917190] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:12.482 [2024-10-01 03:53:04.917208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917359] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:12.482 [2024-10-01 03:53:04.917465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917632] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917857] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 
03:53:04.917912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.917979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918205] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 
00:26:12.483 [2024-10-01 03:53:04.918216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918228] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:12.483 [2024-10-01 03:53:04.918368] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:12.483 [2024-10-01 03:53:04.918388] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a17931-fd73-4654-bf05-2a16be32a316 00:26:12.483 [2024-10-01 03:53:04.918400] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:12.483 [2024-10-01 03:53:04.918412] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:26:12.483 [2024-10-01 03:53:04.918422] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:12.483 [2024-10-01 03:53:04.918433] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:12.483 [2024-10-01 03:53:04.918444] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:12.483 [2024-10-01 03:53:04.918459] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:12.483 [2024-10-01 03:53:04.918471] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:12.483 [2024-10-01 03:53:04.918480] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:12.483 [2024-10-01 03:53:04.918489] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:12.483 [2024-10-01 03:53:04.918499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.483 [2024-10-01 03:53:04.918510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:12.483 [2024-10-01 03:53:04.918522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.310 ms 00:26:12.483 [2024-10-01 03:53:04.918533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.483 [2024-10-01 03:53:04.932812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.483 [2024-10-01 
03:53:04.932858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:12.484 [2024-10-01 03:53:04.932871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.259 ms 00:26:12.484 [2024-10-01 03:53:04.932886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.484 [2024-10-01 03:53:04.933294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:12.484 [2024-10-01 03:53:04.933313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:12.484 [2024-10-01 03:53:04.933324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.386 ms 00:26:12.484 [2024-10-01 03:53:04.933331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.484 [2024-10-01 03:53:04.964818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.484 [2024-10-01 03:53:04.964867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:12.484 [2024-10-01 03:53:04.964879] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.484 [2024-10-01 03:53:04.964893] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.484 [2024-10-01 03:53:04.964959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.484 [2024-10-01 03:53:04.964969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:12.484 [2024-10-01 03:53:04.964979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.484 [2024-10-01 03:53:04.964988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.484 [2024-10-01 03:53:04.965087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.484 [2024-10-01 03:53:04.965099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:12.484 [2024-10-01 03:53:04.965109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.484 [2024-10-01 03:53:04.965118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.484 [2024-10-01 03:53:04.965140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.484 [2024-10-01 03:53:04.965149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:12.484 [2024-10-01 03:53:04.965158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.484 [2024-10-01 03:53:04.965167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.745 [2024-10-01 03:53:05.050065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.745 [2024-10-01 03:53:05.050253] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:12.745 [2024-10-01 03:53:05.050275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.745 [2024-10-01 03:53:05.050291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.745 [2024-10-01 03:53:05.119850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.745 [2024-10-01 03:53:05.119902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:12.745 [2024-10-01 03:53:05.119915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.745 [2024-10-01 03:53:05.119924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.745 [2024-10-01 03:53:05.119984] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.745 [2024-10-01 03:53:05.119993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:12.745 [2024-10-01 03:53:05.120153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.745 [2024-10-01 03:53:05.120185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.745 [2024-10-01 03:53:05.120272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.746 [2024-10-01 03:53:05.120285] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:12.746 [2024-10-01 03:53:05.120295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.746 [2024-10-01 03:53:05.120303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.746 [2024-10-01 03:53:05.120392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.746 [2024-10-01 03:53:05.120403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:12.746 [2024-10-01 03:53:05.120412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.746 [2024-10-01 03:53:05.120420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.746 [2024-10-01 03:53:05.120446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.746 [2024-10-01 03:53:05.120460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:12.746 [2024-10-01 03:53:05.120468] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.746 [2024-10-01 03:53:05.120477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.746 [2024-10-01 03:53:05.120517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.746 [2024-10-01 03:53:05.120527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:12.746 [2024-10-01 03:53:05.120535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.746 [2024-10-01 03:53:05.120544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.746 [2024-10-01 03:53:05.120589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:12.746 [2024-10-01 03:53:05.120603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:12.746 [2024-10-01 03:53:05.120612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:12.746 [2024-10-01 03:53:05.120620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:12.746 [2024-10-01 03:53:05.120753] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 210.551 ms, result 0 00:26:13.688 00:26:13.688 00:26:13.688 03:53:05 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:15.086 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:15.086 03:53:07 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:26:15.347 [2024-10-01 03:53:07.654575] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
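(Aside: the "testfile: OK" line above is plain `md5sum -c` against the checksum file written earlier in the test. A rough Python equivalent of that verification step, with the path taken from the log and the checksum-file format assumed to be the standard "<hexdigest>  <path>" lines md5sum writes:)

    import hashlib

    def md5_of(path, bufsize=1 << 20):
        # stream the file so a large test file need not fit in memory
        h = hashlib.md5()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def md5sum_check(md5_file):
        ok = True
        with open(md5_file) as f:
            for line in f:
                digest, _, path = line.strip().partition("  ")
                good = md5_of(path) == digest
                print(f"{path}: {'OK' if good else 'FAILED'}")
                ok = ok and good
        return ok

    md5sum_check("/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5")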
00:26:15.347 [2024-10-01 03:53:07.654844] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80271 ] 00:26:15.347 [2024-10-01 03:53:07.806401] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:15.609 [2024-10-01 03:53:08.027080] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:26:15.870 [2024-10-01 03:53:08.314103] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:15.870 [2024-10-01 03:53:08.314188] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:16.132 [2024-10-01 03:53:08.474661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.132 [2024-10-01 03:53:08.474718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:16.132 [2024-10-01 03:53:08.474733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:16.132 [2024-10-01 03:53:08.474746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.132 [2024-10-01 03:53:08.474800] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.132 [2024-10-01 03:53:08.474812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:16.132 [2024-10-01 03:53:08.474821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:26:16.132 [2024-10-01 03:53:08.474829] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.132 [2024-10-01 03:53:08.474850] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:16.132 [2024-10-01 03:53:08.475964] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:16.132 [2024-10-01 03:53:08.476044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.132 [2024-10-01 03:53:08.476055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:16.132 [2024-10-01 03:53:08.476067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.199 ms 00:26:16.132 [2024-10-01 03:53:08.476075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.476419] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:26:16.133 [2024-10-01 03:53:08.476446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.476456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:16.133 [2024-10-01 03:53:08.476466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:16.133 [2024-10-01 03:53:08.476475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.476525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.476535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:16.133 [2024-10-01 03:53:08.476543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:26:16.133 [2024-10-01 03:53:08.476555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.476825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
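(Aside: the layout numbers this startup dumps just below, identical to the first run above, are internally consistent and easy to check by hand; the values are read off the log, only the arithmetic is added here:)

    # "L2P entries: 20971520" at "L2P address size: 4" bytes per entry
    entries = 20971520
    addr_size_bytes = 4
    l2p_bytes = entries * addr_size_bytes      # 83,886,080 bytes
    print(l2p_bytes / (1024 * 1024), "MiB")    # -> 80.0, matching "Region l2p ... blocks: 80.00 MiB"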
00:26:16.133 [2024-10-01 03:53:08.476837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:16.133 [2024-10-01 03:53:08.476846] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.237 ms 00:26:16.133 [2024-10-01 03:53:08.476854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.476925] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.476934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:16.133 [2024-10-01 03:53:08.476945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:26:16.133 [2024-10-01 03:53:08.476953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.476975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.476984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:16.133 [2024-10-01 03:53:08.476992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:16.133 [2024-10-01 03:53:08.477019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.477042] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:16.133 [2024-10-01 03:53:08.481375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.481414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:16.133 [2024-10-01 03:53:08.481425] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.337 ms 00:26:16.133 [2024-10-01 03:53:08.481432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.481465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.481473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:16.133 [2024-10-01 03:53:08.481484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:26:16.133 [2024-10-01 03:53:08.481492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.481553] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:16.133 [2024-10-01 03:53:08.481579] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:16.133 [2024-10-01 03:53:08.481614] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:16.133 [2024-10-01 03:53:08.481630] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:16.133 [2024-10-01 03:53:08.481734] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:16.133 [2024-10-01 03:53:08.481749] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:16.133 [2024-10-01 03:53:08.481760] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:16.133 [2024-10-01 03:53:08.481772] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:16.133 [2024-10-01 03:53:08.481781] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:16.133 [2024-10-01 03:53:08.481789] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:16.133 [2024-10-01 03:53:08.481797] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:16.133 [2024-10-01 03:53:08.481806] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:16.133 [2024-10-01 03:53:08.481814] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:16.133 [2024-10-01 03:53:08.481822] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.481830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:16.133 [2024-10-01 03:53:08.481838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:26:16.133 [2024-10-01 03:53:08.481847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.481933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.481942] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:16.133 [2024-10-01 03:53:08.481950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:26:16.133 [2024-10-01 03:53:08.481956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.482132] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:16.133 [2024-10-01 03:53:08.482146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:16.133 [2024-10-01 03:53:08.482155] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:16.133 [2024-10-01 03:53:08.482163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482174] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:16.133 [2024-10-01 03:53:08.482181] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482188] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:16.133 [2024-10-01 03:53:08.482196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:16.133 [2024-10-01 03:53:08.482204] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482211] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:16.133 [2024-10-01 03:53:08.482217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:16.133 [2024-10-01 03:53:08.482225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:16.133 [2024-10-01 03:53:08.482232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:16.133 [2024-10-01 03:53:08.482239] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:16.133 [2024-10-01 03:53:08.482247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:16.133 [2024-10-01 03:53:08.482260] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482267] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:16.133 [2024-10-01 03:53:08.482276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:16.133 [2024-10-01 03:53:08.482283] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:16.133 [2024-10-01 03:53:08.482299] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.133 [2024-10-01 03:53:08.482312] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:16.133 [2024-10-01 03:53:08.482319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.133 [2024-10-01 03:53:08.482332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:16.133 [2024-10-01 03:53:08.482340] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482346] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.133 [2024-10-01 03:53:08.482353] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:16.133 [2024-10-01 03:53:08.482359] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:16.133 [2024-10-01 03:53:08.482373] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:16.133 [2024-10-01 03:53:08.482380] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482386] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:16.133 [2024-10-01 03:53:08.482393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:16.133 [2024-10-01 03:53:08.482400] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:16.133 [2024-10-01 03:53:08.482407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:16.133 [2024-10-01 03:53:08.482413] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:16.133 [2024-10-01 03:53:08.482420] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:16.133 [2024-10-01 03:53:08.482426] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:16.133 [2024-10-01 03:53:08.482440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:16.133 [2024-10-01 03:53:08.482446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482452] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:16.133 [2024-10-01 03:53:08.482460] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:16.133 [2024-10-01 03:53:08.482468] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:16.133 [2024-10-01 03:53:08.482477] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:16.133 [2024-10-01 03:53:08.482485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:16.133 [2024-10-01 03:53:08.482492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:16.133 [2024-10-01 03:53:08.482499] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:16.133 
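The layout dump above is internally consistent, and the arithmetic behind it is worth making explicit. A minimal standalone C sketch of the cross-checks (illustrative only, not SPDK source; the 4096-byte FTL block size is inferred from the logged values rather than stated anywhere in the log):

    /* Cross-checking the ftl_layout dump, using only values printed in
     * this log. Illustrative sketch; not SPDK source. */
    #include <stdio.h>

    int main(void)
    {
        const double MiB = 1048576.0;

        /* The l2p region is reported as 80.00 MiB here and as blk_sz:0x5000
         * (20480 blocks) in the SB metadata dump below, which pins the FTL
         * block size at 80 MiB / 20480 = 4096 bytes. */
        printf("block size: %.0f B\n", 80.0 * MiB / 0x5000);

        /* "L2P entries: 20971520" at "L2P address size: 4" gives the same
         * 80.00 MiB for the table: 20971520 * 4 B. */
        printf("l2p region: %.2f MiB\n", 20971520ULL * 4 / MiB);

        /* Offsets are cumulative: sb is 0x20 blocks = 0.125 MiB (printed,
         * rounded, as "blocks: 0.12 MiB"), so l2p starts at 0.12 MiB and
         * band_md at 0.12 + 80.00 = 80.12 MiB, as dumped above. */
        printf("sb region:  %.3f MiB\n", 0x20 * 4096 / MiB);
        return 0;
    }
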
[2024-10-01 03:53:08.482507] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:16.133 [2024-10-01 03:53:08.482514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:16.133 [2024-10-01 03:53:08.482521] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:16.133 [2024-10-01 03:53:08.482530] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:16.133 [2024-10-01 03:53:08.482539] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:16.133 [2024-10-01 03:53:08.482548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:16.133 [2024-10-01 03:53:08.482555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:16.133 [2024-10-01 03:53:08.482563] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:16.133 [2024-10-01 03:53:08.482570] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:16.133 [2024-10-01 03:53:08.482577] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:16.133 [2024-10-01 03:53:08.482585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:16.133 [2024-10-01 03:53:08.482592] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:16.133 [2024-10-01 03:53:08.482600] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:16.133 [2024-10-01 03:53:08.482607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:16.133 [2024-10-01 03:53:08.482614] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:16.133 [2024-10-01 03:53:08.482622] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:16.133 [2024-10-01 03:53:08.482629] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:16.133 [2024-10-01 03:53:08.482636] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:16.133 [2024-10-01 03:53:08.482643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:16.133 [2024-10-01 03:53:08.482650] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:16.133 [2024-10-01 03:53:08.482659] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:16.133 [2024-10-01 03:53:08.482667] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:16.133 [2024-10-01 03:53:08.482675] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:16.133 [2024-10-01 03:53:08.482682] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:16.133 [2024-10-01 03:53:08.482689] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:16.133 [2024-10-01 03:53:08.482697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.482704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:16.133 [2024-10-01 03:53:08.482715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.648 ms 00:26:16.133 [2024-10-01 03:53:08.482725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.522409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.522459] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:16.133 [2024-10-01 03:53:08.522476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.642 ms 00:26:16.133 [2024-10-01 03:53:08.522485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.522581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.522591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:16.133 [2024-10-01 03:53:08.522604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:26:16.133 [2024-10-01 03:53:08.522611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.557507] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.557713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:16.133 [2024-10-01 03:53:08.557736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.834 ms 00:26:16.133 [2024-10-01 03:53:08.557744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.557783] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.557792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:16.133 [2024-10-01 03:53:08.557802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:16.133 [2024-10-01 03:53:08.557810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.557911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.557930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:16.133 [2024-10-01 03:53:08.557938] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms 00:26:16.133 [2024-10-01 03:53:08.557947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.558116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.558127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:16.133 [2024-10-01 03:53:08.558136] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.153 ms 00:26:16.133 [2024-10-01 03:53:08.558144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.572694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.572851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:16.133 [2024-10-01 03:53:08.572869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.531 ms 00:26:16.133 [2024-10-01 03:53:08.572877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.573057] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:16.133 [2024-10-01 03:53:08.573073] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:16.133 [2024-10-01 03:53:08.573083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.573091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:16.133 [2024-10-01 03:53:08.573100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:26:16.133 [2024-10-01 03:53:08.573108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.585428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.585469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:16.133 [2024-10-01 03:53:08.585480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.300 ms 00:26:16.133 [2024-10-01 03:53:08.585492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.585613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.585622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:16.133 [2024-10-01 03:53:08.585632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:26:16.133 [2024-10-01 03:53:08.585639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.133 [2024-10-01 03:53:08.585689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.133 [2024-10-01 03:53:08.585698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:16.133 [2024-10-01 03:53:08.585706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:26:16.133 [2024-10-01 03:53:08.585714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.586336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.134 [2024-10-01 03:53:08.586352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:16.134 [2024-10-01 03:53:08.586361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.582 ms 00:26:16.134 [2024-10-01 03:53:08.586369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.586387] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:26:16.134 [2024-10-01 03:53:08.586398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.134 [2024-10-01 03:53:08.586407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:26:16.134 [2024-10-01 03:53:08.586415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:16.134 [2024-10-01 03:53:08.586422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.599047] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:16.134 [2024-10-01 03:53:08.599317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.134 [2024-10-01 03:53:08.599336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:16.134 [2024-10-01 03:53:08.599347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.875 ms 00:26:16.134 [2024-10-01 03:53:08.599354] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.601513] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.134 [2024-10-01 03:53:08.601541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:16.134 [2024-10-01 03:53:08.601552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.135 ms 00:26:16.134 [2024-10-01 03:53:08.601560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.601650] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.134 [2024-10-01 03:53:08.601665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:16.134 [2024-10-01 03:53:08.601674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:26:16.134 [2024-10-01 03:53:08.601683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.601708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.134 [2024-10-01 03:53:08.601718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:16.134 [2024-10-01 03:53:08.601726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:16.134 [2024-10-01 03:53:08.601734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.601764] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:16.134 [2024-10-01 03:53:08.601773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.134 [2024-10-01 03:53:08.601781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:16.134 [2024-10-01 03:53:08.601790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:16.134 [2024-10-01 03:53:08.601798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.628290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.134 [2024-10-01 03:53:08.628340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:16.134 [2024-10-01 03:53:08.628353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.471 ms 00:26:16.134 [2024-10-01 03:53:08.628362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.628450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:16.134 [2024-10-01 03:53:08.628466] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:16.134 [2024-10-01 03:53:08.628476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.041 ms 00:26:16.134 [2024-10-01 03:53:08.628483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:16.134 [2024-10-01 03:53:08.629951] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.835 ms, result 0 00:27:18.736  Copying: 17/1024 [MB] (17 MBps) Copying: 29/1024 [MB] (12 MBps) ... Copying: 1023/1024 [MB] (16 MBps) Copying: 1024/1024 [MB] (average 16 MBps)[2024-10-01 03:54:11.027621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.736 [2024-10-01 03:54:11.027787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:18.736 [2024-10-01 03:54:11.027806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:27:18.736 [2024-10-01 03:54:11.027813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.736 [2024-10-01 03:54:11.029640] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:18.736 [2024-10-01 03:54:11.033949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.736 [2024-10-01 03:54:11.033978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:18.736 [2024-10-01 03:54:11.033987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.280 ms 00:27:18.736 [2024-10-01 03:54:11.033995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.736 [2024-10-01 03:54:11.040967] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:27:18.736 [2024-10-01 03:54:11.040990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:18.736 [2024-10-01 03:54:11.040998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.953 ms 00:27:18.736 [2024-10-01 03:54:11.041013] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.736 [2024-10-01 03:54:11.041035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.736 [2024-10-01 03:54:11.041042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:27:18.736 [2024-10-01 03:54:11.041052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:18.736 [2024-10-01 03:54:11.041057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.736 [2024-10-01 03:54:11.041092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.736 [2024-10-01 03:54:11.041099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:27:18.736 [2024-10-01 03:54:11.041105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:18.736 [2024-10-01 03:54:11.041111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.736 [2024-10-01 03:54:11.041121] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:18.736 [2024-10-01 03:54:11.041130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 126208 / 261120 wr_cnt: 1 state: open 00:27:18.736 [2024-10-01 03:54:11.041137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:18.736 [2024-10-01 03:54:11.041143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:18.736 [2024-10-01 03:54:11.041149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:18.736 [2024-10-01 03:54:11.041155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:18.736 [2024-10-01 03:54:11.041161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:18.736 [2024-10-01 03:54:11.041167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 
00:27:18.737 [2024-10-01 03:54:11.041220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 
wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:18.737 [2024-10-01 03:54:11.041552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041658] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:18.738 [2024-10-01 03:54:11.041728] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:18.738 [2024-10-01 03:54:11.041735] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a17931-fd73-4654-bf05-2a16be32a316 00:27:18.738 [2024-10-01 03:54:11.041742] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 126208 00:27:18.738 [2024-10-01 03:54:11.041748] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 126240 00:27:18.738 [2024-10-01 03:54:11.041753] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 126208 00:27:18.738 [2024-10-01 03:54:11.041760] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:27:18.738 [2024-10-01 03:54:11.041765] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:18.738 [2024-10-01 03:54:11.041771] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:18.738 [2024-10-01 03:54:11.041777] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:18.738 [2024-10-01 03:54:11.041782] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:18.738 [2024-10-01 03:54:11.041787] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:18.738 [2024-10-01 03:54:11.041792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.738 [2024-10-01 03:54:11.041798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:18.738 [2024-10-01 03:54:11.041804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.672 ms 00:27:18.738 [2024-10-01 03:54:11.041810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.051612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.738 [2024-10-01 03:54:11.051637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:18.738 [2024-10-01 03:54:11.051645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
9.791 ms 00:27:18.738 [2024-10-01 03:54:11.051651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.051917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:18.738 [2024-10-01 03:54:11.051928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:18.738 [2024-10-01 03:54:11.051934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.253 ms 00:27:18.738 [2024-10-01 03:54:11.051943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.073982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.074015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:18.738 [2024-10-01 03:54:11.074022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.074028] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.074069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.074075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:18.738 [2024-10-01 03:54:11.074081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.074089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.074120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.074127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:18.738 [2024-10-01 03:54:11.074133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.074138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.074150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.074156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:18.738 [2024-10-01 03:54:11.074162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.074167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.132920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.132948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:18.738 [2024-10-01 03:54:11.132960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.132965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.180878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.180905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:18.738 [2024-10-01 03:54:11.180913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.180923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.180972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.180979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:18.738 [2024-10-01 
03:54:11.180986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.180991] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.181025] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.181032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:18.738 [2024-10-01 03:54:11.181038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.181044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.181102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.181110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:18.738 [2024-10-01 03:54:11.181116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.181122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.181139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.181145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:18.738 [2024-10-01 03:54:11.181151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.181157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.181184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.738 [2024-10-01 03:54:11.181191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:18.738 [2024-10-01 03:54:11.181197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.738 [2024-10-01 03:54:11.181203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.738 [2024-10-01 03:54:11.181233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:18.739 [2024-10-01 03:54:11.181241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:18.739 [2024-10-01 03:54:11.181247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:18.739 [2024-10-01 03:54:11.181253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:18.739 [2024-10-01 03:54:11.181337] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 155.336 ms, result 0 00:27:20.117 00:27:20.117 00:27:20.117 03:54:12 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:27:20.117 [2024-10-01 03:54:12.459877] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 
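Two numbers around this point reward a quick sanity check: the WAF line in the statistics dump above, and the --skip/--count arguments of the spdk_dd read-back just launched. WAF as printed by ftl_debug.c is total writes divided by user writes, and the dd-style arguments land on round offsets if taken as counts of the same 4 KiB blocks inferred earlier (an inference, not something this log states; it is consistent with the 1024 MB total in the earlier copy progress). A short illustrative sketch in C:

    /* Back-of-envelope checks against values logged above; illustrative
     * sketch, not SPDK source. */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* ftl_debug.c dumps WAF = total writes / user writes:
         * 126240 / 126208 = 1.000254, printed above as 1.0003. */
        printf("WAF: %.4f\n", 126240.0 / 126208.0);

        /* spdk_dd --skip=131072 --count=262144, read dd-style as block
         * counts, with the 4096-byte block size inferred from the layout
         * dump (an assumption, not stated explicitly in this log): */
        const uint64_t bs = 4096;
        printf("skip  = %llu MiB\n",
               (unsigned long long)(131072ULL * bs / 1048576)); /* 512 MiB */
        printf("count = %llu MiB\n",
               (unsigned long long)(262144ULL * bs / 1048576)); /* 1024 MiB,
                   matching the x/1024 [MB] progress totals above */
        return 0;
    }
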
00:27:20.117 [2024-10-01 03:54:12.459996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80932 ] 00:27:20.117 [2024-10-01 03:54:12.608778] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:20.376 [2024-10-01 03:54:12.758010] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:27:20.637 [2024-10-01 03:54:12.961176] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:20.637 [2024-10-01 03:54:12.961225] bdev.c:8272:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:20.637 [2024-10-01 03:54:13.117839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.637 [2024-10-01 03:54:13.117887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:20.637 [2024-10-01 03:54:13.117901] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:20.637 [2024-10-01 03:54:13.117913] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.637 [2024-10-01 03:54:13.117956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.637 [2024-10-01 03:54:13.117966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:20.637 [2024-10-01 03:54:13.117974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:27:20.637 [2024-10-01 03:54:13.117981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.637 [2024-10-01 03:54:13.118012] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:20.637 [2024-10-01 03:54:13.118663] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:20.637 [2024-10-01 03:54:13.118679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.637 [2024-10-01 03:54:13.118687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:20.637 [2024-10-01 03:54:13.118695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.682 ms 00:27:20.637 [2024-10-01 03:54:13.118702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.637 [2024-10-01 03:54:13.118934] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:27:20.637 [2024-10-01 03:54:13.118956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.637 [2024-10-01 03:54:13.118964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:20.637 [2024-10-01 03:54:13.118972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:27:20.637 [2024-10-01 03:54:13.118980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.637 [2024-10-01 03:54:13.119033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.637 [2024-10-01 03:54:13.119043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:20.637 [2024-10-01 03:54:13.119051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:27:20.637 [2024-10-01 03:54:13.119059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.637 [2024-10-01 03:54:13.119344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:20.637 [2024-10-01 03:54:13.119355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:20.637 [2024-10-01 03:54:13.119363] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.221 ms 00:27:20.637 [2024-10-01 03:54:13.119370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.637 [2024-10-01 03:54:13.119431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.637 [2024-10-01 03:54:13.119440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:20.637 [2024-10-01 03:54:13.119450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:27:20.637 [2024-10-01 03:54:13.119457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.637 [2024-10-01 03:54:13.119477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.637 [2024-10-01 03:54:13.119485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:20.637 [2024-10-01 03:54:13.119493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:20.637 [2024-10-01 03:54:13.119500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.637 [2024-10-01 03:54:13.119517] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:20.637 [2024-10-01 03:54:13.123131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.637 [2024-10-01 03:54:13.123156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:20.637 [2024-10-01 03:54:13.123165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.617 ms 00:27:20.637 [2024-10-01 03:54:13.123172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.637 [2024-10-01 03:54:13.123205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.637 [2024-10-01 03:54:13.123213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:20.638 [2024-10-01 03:54:13.123224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:27:20.638 [2024-10-01 03:54:13.123232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.638 [2024-10-01 03:54:13.123275] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:20.638 [2024-10-01 03:54:13.123296] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:20.638 [2024-10-01 03:54:13.123330] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:20.638 [2024-10-01 03:54:13.123345] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:20.638 [2024-10-01 03:54:13.123445] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:20.638 [2024-10-01 03:54:13.123457] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:20.638 [2024-10-01 03:54:13.123467] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:27:20.638 [2024-10-01 03:54:13.123476] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:20.638 [2024-10-01 03:54:13.123485] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:20.638 [2024-10-01 03:54:13.123493] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:20.638 [2024-10-01 03:54:13.123499] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:20.638 [2024-10-01 03:54:13.123506] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:20.638 [2024-10-01 03:54:13.123513] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:20.638 [2024-10-01 03:54:13.123520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.638 [2024-10-01 03:54:13.123527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:20.638 [2024-10-01 03:54:13.123535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.247 ms 00:27:20.638 [2024-10-01 03:54:13.123544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.638 [2024-10-01 03:54:13.123625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.638 [2024-10-01 03:54:13.123633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:20.638 [2024-10-01 03:54:13.123640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:27:20.638 [2024-10-01 03:54:13.123646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.638 [2024-10-01 03:54:13.123745] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:20.638 [2024-10-01 03:54:13.123755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:20.638 [2024-10-01 03:54:13.123763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:20.638 [2024-10-01 03:54:13.123770] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:20.638 [2024-10-01 03:54:13.123789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123797] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:20.638 [2024-10-01 03:54:13.123803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:20.638 [2024-10-01 03:54:13.123810] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:20.638 [2024-10-01 03:54:13.123824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:20.638 [2024-10-01 03:54:13.123831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:20.638 [2024-10-01 03:54:13.123838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:20.638 [2024-10-01 03:54:13.123844] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:20.638 [2024-10-01 03:54:13.123851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:20.638 [2024-10-01 03:54:13.123863] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:27:20.638 [2024-10-01 03:54:13.123876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:20.638 [2024-10-01 03:54:13.123883] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:20.638 [2024-10-01 03:54:13.123896] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.638 [2024-10-01 03:54:13.123909] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:20.638 [2024-10-01 03:54:13.123916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123922] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.638 [2024-10-01 03:54:13.123929] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:20.638 [2024-10-01 03:54:13.123935] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123942] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.638 [2024-10-01 03:54:13.123949] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:20.638 [2024-10-01 03:54:13.123955] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123962] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:20.638 [2024-10-01 03:54:13.123968] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:20.638 [2024-10-01 03:54:13.123975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:20.638 [2024-10-01 03:54:13.123982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:20.638 [2024-10-01 03:54:13.123989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:20.638 [2024-10-01 03:54:13.123995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:20.638 [2024-10-01 03:54:13.124016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:20.638 [2024-10-01 03:54:13.124024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:20.638 [2024-10-01 03:54:13.124031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:20.638 [2024-10-01 03:54:13.124038] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.638 [2024-10-01 03:54:13.124045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:20.638 [2024-10-01 03:54:13.124051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:20.638 [2024-10-01 03:54:13.124058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.638 [2024-10-01 03:54:13.124064] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:20.638 [2024-10-01 03:54:13.124072] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:20.638 [2024-10-01 03:54:13.124079] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:20.638 [2024-10-01 03:54:13.124086] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:20.638 [2024-10-01 03:54:13.124093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:20.638 [2024-10-01 03:54:13.124100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:20.638 [2024-10-01 03:54:13.124107] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:20.638 
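The restored instance prints the same layout as the first startup, which is what a successful superblock load after a clean fast shutdown should produce (note the "SHM: clean 1, shm_clean 1" record above). One last piece of arithmetic ties the dump together, the sizing of the four P2L checkpoint regions; a short illustrative C sketch under the same 4 KiB block-size assumption as before:

    /* P2L checkpoint sizing from the dump above; illustrative sketch,
     * not SPDK source. */
    #include <stdio.h>

    int main(void)
    {
        const double MiB = 1048576.0;

        /* "P2L checkpoint pages: 2048" matches the SB metadata entries of
         * types 0xa-0xd (blk_sz:0x800 = 2048 blocks each): 2048 * 4096 B
         * = 8.00 MiB, the size dumped for each of p2l0..p2l3. */
        printf("p2l region: %.2f MiB\n", 0x800 * 4096 / MiB);

        /* The four regions sit back to back in the NV cache, as dumped:
         * 81.12, 89.12, 97.12 and 105.12 MiB. */
        for (int i = 0; i < 4; i++)
            printf("p2l%d offset: %.2f MiB\n", i, 81.12 + 8.0 * i);
        return 0;
    }
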
[2024-10-01 03:54:13.124113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:20.638 [2024-10-01 03:54:13.124120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:20.638 [2024-10-01 03:54:13.124126] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:20.638 [2024-10-01 03:54:13.124134] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:20.638 [2024-10-01 03:54:13.124143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.638 [2024-10-01 03:54:13.124152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:20.638 [2024-10-01 03:54:13.124159] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:20.638 [2024-10-01 03:54:13.124166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:20.638 [2024-10-01 03:54:13.124173] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:20.638 [2024-10-01 03:54:13.124180] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:20.638 [2024-10-01 03:54:13.124187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:20.638 [2024-10-01 03:54:13.124194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:20.638 [2024-10-01 03:54:13.124201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:20.638 [2024-10-01 03:54:13.124209] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:20.638 [2024-10-01 03:54:13.124225] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:20.638 [2024-10-01 03:54:13.124232] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:20.638 [2024-10-01 03:54:13.124239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:20.638 [2024-10-01 03:54:13.124246] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:20.638 [2024-10-01 03:54:13.124254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:20.638 [2024-10-01 03:54:13.124261] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:20.639 [2024-10-01 03:54:13.124269] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.639 [2024-10-01 03:54:13.124278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:27:20.639 [2024-10-01 03:54:13.124285] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:20.639 [2024-10-01 03:54:13.124292] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:20.639 [2024-10-01 03:54:13.124300] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:20.639 [2024-10-01 03:54:13.124308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.639 [2024-10-01 03:54:13.124315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:20.639 [2024-10-01 03:54:13.124326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.630 ms 00:27:20.639 [2024-10-01 03:54:13.124333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.639 [2024-10-01 03:54:13.159809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.639 [2024-10-01 03:54:13.159841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:20.639 [2024-10-01 03:54:13.159855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.434 ms 00:27:20.639 [2024-10-01 03:54:13.159863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.639 [2024-10-01 03:54:13.159946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.639 [2024-10-01 03:54:13.159955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:20.639 [2024-10-01 03:54:13.159965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:27:20.639 [2024-10-01 03:54:13.159973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.190157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.190184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:20.900 [2024-10-01 03:54:13.190193] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.117 ms 00:27:20.900 [2024-10-01 03:54:13.190201] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.190228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.190236] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:20.900 [2024-10-01 03:54:13.190244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:27:20.900 [2024-10-01 03:54:13.190251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.190330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.190343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:20.900 [2024-10-01 03:54:13.190351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:27:20.900 [2024-10-01 03:54:13.190358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.190466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.190481] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:20.900 [2024-10-01 03:54:13.190489] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:27:20.900 [2024-10-01 03:54:13.190496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.202966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.202995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:20.900 [2024-10-01 03:54:13.203022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.453 ms 00:27:20.900 [2024-10-01 03:54:13.203030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.203135] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:20.900 [2024-10-01 03:54:13.203147] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:20.900 [2024-10-01 03:54:13.203155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.203163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:20.900 [2024-10-01 03:54:13.203171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:27:20.900 [2024-10-01 03:54:13.203178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.215570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.215592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:20.900 [2024-10-01 03:54:13.215602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.378 ms 00:27:20.900 [2024-10-01 03:54:13.215613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.215719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.215728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:20.900 [2024-10-01 03:54:13.215736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:27:20.900 [2024-10-01 03:54:13.215743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.215796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.215806] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:20.900 [2024-10-01 03:54:13.215814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.001 ms 00:27:20.900 [2024-10-01 03:54:13.215821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.216405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.216422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:20.900 [2024-10-01 03:54:13.216430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.549 ms 00:27:20.900 [2024-10-01 03:54:13.216437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.216452] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:27:20.900 [2024-10-01 03:54:13.216461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.216469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:27:20.900 [2024-10-01 03:54:13.216476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:27:20.900 [2024-10-01 03:54:13.216483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.227581] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:20.900 [2024-10-01 03:54:13.227750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.227760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:20.900 [2024-10-01 03:54:13.227769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.250 ms 00:27:20.900 [2024-10-01 03:54:13.227776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.230019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.230040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:20.900 [2024-10-01 03:54:13.230049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.225 ms 00:27:20.900 [2024-10-01 03:54:13.230057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.230115] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:27:20.900 [2024-10-01 03:54:13.230554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.230569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:20.900 [2024-10-01 03:54:13.230577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.454 ms 00:27:20.900 [2024-10-01 03:54:13.230585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.230608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.230616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:20.900 [2024-10-01 03:54:13.230624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:20.900 [2024-10-01 03:54:13.230631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.230659] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:20.900 [2024-10-01 03:54:13.230671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.230678] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:20.900 [2024-10-01 03:54:13.230685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:27:20.900 [2024-10-01 03:54:13.230693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.254740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.254771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:20.900 [2024-10-01 03:54:13.254782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.031 ms 00:27:20.900 [2024-10-01 03:54:13.254790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.254865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:20.900 [2024-10-01 03:54:13.254874] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:20.900 [2024-10-01 03:54:13.254882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:27:20.900 [2024-10-01 03:54:13.254889] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:20.900 [2024-10-01 03:54:13.259403] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 139.425 ms, result 0 00:28:28.012  Copying: 1024/1024 [MB] (average 15 MBps)[2024-10-01 03:55:20.493348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.013 [2024-10-01 03:55:20.493401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:28.013 [2024-10-01 03:55:20.493416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:28:28.013 [2024-10-01 03:55:20.493425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.013 [2024-10-01 03:55:20.493450] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:28.013 [2024-10-01 03:55:20.496218] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.013 [2024-10-01 03:55:20.496251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 
00:28:28.013 [2024-10-01 03:55:20.496262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.753 ms 00:28:28.013 [2024-10-01 03:55:20.496271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.013 [2024-10-01 03:55:20.496494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.013 [2024-10-01 03:55:20.496505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:28.013 [2024-10-01 03:55:20.496514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:28:28.013 [2024-10-01 03:55:20.496522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.013 [2024-10-01 03:55:20.496552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.013 [2024-10-01 03:55:20.496561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:28:28.013 [2024-10-01 03:55:20.496569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:28.013 [2024-10-01 03:55:20.496577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.013 [2024-10-01 03:55:20.496623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.013 [2024-10-01 03:55:20.496632] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:28:28.013 [2024-10-01 03:55:20.496641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:28:28.013 [2024-10-01 03:55:20.496648] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.013 [2024-10-01 03:55:20.496662] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:28.013 [2024-10-01 03:55:20.496674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:28:28.013 [2024-10-01 03:55:20.496687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: 
free 00:28:28.013 [2024-10-01 03:55:20.496777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496801] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496935] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 
261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.496997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497105] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:28.013 [2024-10-01 03:55:20.497152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497310] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497370] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:28.014 [2024-10-01 03:55:20.497475] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:28.014 [2024-10-01 03:55:20.497482] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 62a17931-fd73-4654-bf05-2a16be32a316 00:28:28.014 [2024-10-01 03:55:20.497490] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:28:28.014 [2024-10-01 03:55:20.497497] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 4896 00:28:28.014 [2024-10-01 03:55:20.497504] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 4864 00:28:28.014 [2024-10-01 03:55:20.497512] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0066 00:28:28.014 [2024-10-01 03:55:20.497520] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:28.014 [2024-10-01 03:55:20.497528] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:28.014 [2024-10-01 03:55:20.497535] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:28.014 [2024-10-01 03:55:20.497541] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:28.014 [2024-10-01 03:55:20.497547] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:28.014 [2024-10-01 03:55:20.497554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.014 [2024-10-01 03:55:20.497565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:28.014 [2024-10-01 03:55:20.497572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.893 ms 00:28:28.014 [2024-10-01 03:55:20.497583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.014 [2024-10-01 
03:55:20.511030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.014 [2024-10-01 03:55:20.511069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:28:28.014 [2024-10-01 03:55:20.511082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.432 ms 00:28:28.014 [2024-10-01 03:55:20.511091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.014 [2024-10-01 03:55:20.511457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:28.014 [2024-10-01 03:55:20.511482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:28.014 [2024-10-01 03:55:20.511491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:28:28.014 [2024-10-01 03:55:20.511499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.014 [2024-10-01 03:55:20.541923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.014 [2024-10-01 03:55:20.541963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:28.014 [2024-10-01 03:55:20.541975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.014 [2024-10-01 03:55:20.541985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.014 [2024-10-01 03:55:20.542069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.014 [2024-10-01 03:55:20.542086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:28.014 [2024-10-01 03:55:20.542095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.014 [2024-10-01 03:55:20.542104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.014 [2024-10-01 03:55:20.542154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.014 [2024-10-01 03:55:20.542166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:28.014 [2024-10-01 03:55:20.542176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.014 [2024-10-01 03:55:20.542185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.014 [2024-10-01 03:55:20.542202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.014 [2024-10-01 03:55:20.542211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:28.014 [2024-10-01 03:55:20.542223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.014 [2024-10-01 03:55:20.542232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.275 [2024-10-01 03:55:20.626578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.275 [2024-10-01 03:55:20.626631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:28.275 [2024-10-01 03:55:20.626644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.275 [2024-10-01 03:55:20.626653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.275 [2024-10-01 03:55:20.696919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.275 [2024-10-01 03:55:20.696978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:28.275 [2024-10-01 03:55:20.696991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.275 [2024-10-01 03:55:20.697014] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.275 [2024-10-01 03:55:20.697098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.275 [2024-10-01 03:55:20.697109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:28.275 [2024-10-01 03:55:20.697118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.275 [2024-10-01 03:55:20.697127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.275 [2024-10-01 03:55:20.697165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.275 [2024-10-01 03:55:20.697175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:28.275 [2024-10-01 03:55:20.697185] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.275 [2024-10-01 03:55:20.697198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.275 [2024-10-01 03:55:20.697284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.275 [2024-10-01 03:55:20.697296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:28.275 [2024-10-01 03:55:20.697304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.275 [2024-10-01 03:55:20.697313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.275 [2024-10-01 03:55:20.697338] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.275 [2024-10-01 03:55:20.697347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:28.275 [2024-10-01 03:55:20.697355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.275 [2024-10-01 03:55:20.697364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.275 [2024-10-01 03:55:20.697407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.275 [2024-10-01 03:55:20.697415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:28.275 [2024-10-01 03:55:20.697424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.275 [2024-10-01 03:55:20.697434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.275 [2024-10-01 03:55:20.697480] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:28.275 [2024-10-01 03:55:20.697491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:28.275 [2024-10-01 03:55:20.697500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:28.275 [2024-10-01 03:55:20.697512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:28.275 [2024-10-01 03:55:20.697644] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 204.259 ms, result 0 00:28:29.216 00:28:29.216 00:28:29.216 03:55:21 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:31.760 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:31.760 03:55:23 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:28:31.760 03:55:23 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:28:31.760 03:55:23 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:28:31.760 03:55:23 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:31.760 03:55:23 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:31.760 03:55:23 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 78625 00:28:31.760 03:55:23 ftl.ftl_restore_fast -- common/autotest_common.sh@950 -- # '[' -z 78625 ']' 00:28:31.760 03:55:23 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # kill -0 78625 00:28:31.760 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (78625) - No such process 00:28:31.760 Process with pid 78625 is not found 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- common/autotest_common.sh@977 -- # echo 'Process with pid 78625 is not found' 00:28:31.761 Remove shared memory files 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_band_md /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_l2p_l1 /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_l2p_l2 /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_l2p_l2_ctx /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_nvc_md /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_p2l_pool /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_sb /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_sb_shm /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_trim_bitmap /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_trim_log /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_trim_md /dev/hugepages/ftl_62a17931-fd73-4654-bf05-2a16be32a316_vmap 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:28:31.761 00:28:31.761 real 4m56.702s 00:28:31.761 user 4m45.051s 00:28:31.761 sys 0m11.336s 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:31.761 ************************************ 00:28:31.761 03:55:23 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:28:31.761 END TEST ftl_restore_fast 00:28:31.761 ************************************ 00:28:31.761 03:55:24 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:31.761 03:55:24 ftl -- ftl/ftl.sh@14 -- # killprocess 72816 00:28:31.761 03:55:24 ftl -- common/autotest_common.sh@950 -- # '[' -z 72816 ']' 00:28:31.761 03:55:24 ftl -- common/autotest_common.sh@954 -- # kill -0 72816 00:28:31.761 Process with pid 72816 is not found 00:28:31.761 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (72816) - No such process 00:28:31.761 03:55:24 ftl -- common/autotest_common.sh@977 -- # echo 'Process with pid 72816 is not found' 00:28:31.761 03:55:24 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:31.761 03:55:24 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81674 00:28:31.761 03:55:24 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81674 00:28:31.761 03:55:24 ftl -- common/autotest_common.sh@831 -- # '[' -z 81674 ']' 00:28:31.761 03:55:24 ftl -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.761 03:55:24 ftl -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:28:31.761 03:55:24 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:31.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.761 03:55:24 ftl -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.761 03:55:24 ftl -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:31.761 03:55:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:31.761 [2024-10-01 03:55:24.123458] Starting SPDK v25.01-pre git sha1 09cc66129 / DPDK 24.03.0 initialization... 00:28:31.761 [2024-10-01 03:55:24.123641] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81674 ] 00:28:31.761 [2024-10-01 03:55:24.282743] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.020 [2024-10-01 03:55:24.509542] reactor.c: 990:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.958 03:55:25 ftl -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:32.958 03:55:25 ftl -- common/autotest_common.sh@864 -- # return 0 00:28:32.958 03:55:25 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:32.958 nvme0n1 00:28:32.958 03:55:25 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:32.958 03:55:25 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:32.958 03:55:25 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:33.217 03:55:25 ftl -- ftl/common.sh@28 -- # stores=5a51668e-cbea-4ddd-a7ff-815d46a3004c 00:28:33.217 03:55:25 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:33.217 03:55:25 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a51668e-cbea-4ddd-a7ff-815d46a3004c 00:28:33.477 03:55:25 ftl -- ftl/ftl.sh@23 -- # killprocess 81674 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@950 -- # '[' -z 81674 ']' 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@954 -- # kill -0 81674 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@955 -- # uname 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81674 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:28:33.477 killing process with pid 81674 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81674' 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@969 -- # kill 81674 00:28:33.477 03:55:25 ftl -- common/autotest_common.sh@974 -- # wait 81674 00:28:34.856 03:55:27 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:35.116 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:35.116 Waiting for block devices as requested 00:28:35.375 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:35.375 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:35.375 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:28:35.634 0000:00:13.0 (1b36 0010): 
uio_pci_generic -> nvme 00:28:40.957 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:28:40.957 03:55:33 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:28:40.957 Remove shared memory files 00:28:40.957 03:55:33 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:40.957 03:55:33 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:28:40.957 03:55:33 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:28:40.957 03:55:33 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:28:40.957 03:55:33 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:40.957 03:55:33 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:28:40.957 ************************************ 00:28:40.957 END TEST ftl 00:28:40.957 ************************************ 00:28:40.957 00:28:40.957 real 12m52.568s 00:28:40.957 user 14m36.916s 00:28:40.957 sys 1m23.414s 00:28:40.957 03:55:33 ftl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:40.957 03:55:33 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:40.957 03:55:33 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:28:40.957 03:55:33 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:28:40.957 03:55:33 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:28:40.957 03:55:33 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:28:40.957 03:55:33 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:28:40.957 03:55:33 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:28:40.957 03:55:33 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:28:40.957 03:55:33 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:28:40.957 03:55:33 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:28:40.957 03:55:33 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:28:40.957 03:55:33 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:40.957 03:55:33 -- common/autotest_common.sh@10 -- # set +x 00:28:40.957 03:55:33 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:28:40.957 03:55:33 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:28:40.957 03:55:33 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:28:40.957 03:55:33 -- common/autotest_common.sh@10 -- # set +x 00:28:41.905 INFO: APP EXITING 00:28:41.905 INFO: killing all VMs 00:28:41.905 INFO: killing vhost app 00:28:41.905 INFO: EXIT DONE 00:28:42.476 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:42.736 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:28:42.736 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:28:42.736 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:28:42.736 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:28:43.306 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:43.567 Cleaning 00:28:43.567 Removing: /var/run/dpdk/spdk0/config 00:28:43.567 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:28:43.567 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:28:43.567 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:28:43.567 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:28:43.567 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:28:43.567 Removing: /var/run/dpdk/spdk0/hugepage_info 00:28:43.567 Removing: /var/run/dpdk/spdk0 00:28:43.567 Removing: /var/run/dpdk/spdk_pid57313 00:28:43.567 Removing: /var/run/dpdk/spdk_pid57521 00:28:43.567 Removing: /var/run/dpdk/spdk_pid57733 00:28:43.567 Removing: /var/run/dpdk/spdk_pid57832 00:28:43.567 Removing: /var/run/dpdk/spdk_pid57871 
00:28:43.567 Removing: /var/run/dpdk/spdk_pid57994 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58012 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58216 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58309 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58401 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58511 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58602 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58642 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58684 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58754 00:28:43.567 Removing: /var/run/dpdk/spdk_pid58866 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59302 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59355 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59418 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59434 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59536 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59552 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59649 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59665 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59718 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59736 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59789 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59807 00:28:43.567 Removing: /var/run/dpdk/spdk_pid59967 00:28:43.567 Removing: /var/run/dpdk/spdk_pid60003 00:28:43.567 Removing: /var/run/dpdk/spdk_pid60091 00:28:43.567 Removing: /var/run/dpdk/spdk_pid60264 00:28:43.827 Removing: /var/run/dpdk/spdk_pid60354 00:28:43.827 Removing: /var/run/dpdk/spdk_pid60396 00:28:43.827 Removing: /var/run/dpdk/spdk_pid60842 00:28:43.827 Removing: /var/run/dpdk/spdk_pid60942 00:28:43.827 Removing: /var/run/dpdk/spdk_pid61064 00:28:43.827 Removing: /var/run/dpdk/spdk_pid61117 00:28:43.827 Removing: /var/run/dpdk/spdk_pid61148 00:28:43.827 Removing: /var/run/dpdk/spdk_pid61232 00:28:43.827 Removing: /var/run/dpdk/spdk_pid61849 00:28:43.827 Removing: /var/run/dpdk/spdk_pid61886 00:28:43.827 Removing: /var/run/dpdk/spdk_pid62366 00:28:43.827 Removing: /var/run/dpdk/spdk_pid62464 00:28:43.827 Removing: /var/run/dpdk/spdk_pid62588 00:28:43.827 Removing: /var/run/dpdk/spdk_pid62641 00:28:43.827 Removing: /var/run/dpdk/spdk_pid62661 00:28:43.827 Removing: /var/run/dpdk/spdk_pid62692 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64539 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64676 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64680 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64692 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64738 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64742 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64754 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64799 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64803 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64815 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64854 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64858 00:28:43.827 Removing: /var/run/dpdk/spdk_pid64870 00:28:43.827 Removing: /var/run/dpdk/spdk_pid66227 00:28:43.827 Removing: /var/run/dpdk/spdk_pid66324 00:28:43.827 Removing: /var/run/dpdk/spdk_pid67728 00:28:43.827 Removing: /var/run/dpdk/spdk_pid69099 00:28:43.827 Removing: /var/run/dpdk/spdk_pid69187 00:28:43.827 Removing: /var/run/dpdk/spdk_pid69281 00:28:43.827 Removing: /var/run/dpdk/spdk_pid69368 00:28:43.827 Removing: /var/run/dpdk/spdk_pid69473 00:28:43.827 Removing: /var/run/dpdk/spdk_pid69553 00:28:43.827 Removing: /var/run/dpdk/spdk_pid69696 00:28:43.827 Removing: /var/run/dpdk/spdk_pid70055 00:28:43.827 Removing: /var/run/dpdk/spdk_pid70092 00:28:43.827 Removing: /var/run/dpdk/spdk_pid70548 00:28:43.827 Removing: 
/var/run/dpdk/spdk_pid70726 00:28:43.827 Removing: /var/run/dpdk/spdk_pid70825 00:28:43.827 Removing: /var/run/dpdk/spdk_pid70941 00:28:43.827 Removing: /var/run/dpdk/spdk_pid70991 00:28:43.827 Removing: /var/run/dpdk/spdk_pid71022 00:28:43.827 Removing: /var/run/dpdk/spdk_pid71317 00:28:43.827 Removing: /var/run/dpdk/spdk_pid71383 00:28:43.827 Removing: /var/run/dpdk/spdk_pid71455 00:28:43.827 Removing: /var/run/dpdk/spdk_pid71859 00:28:43.827 Removing: /var/run/dpdk/spdk_pid72006 00:28:43.827 Removing: /var/run/dpdk/spdk_pid72816 00:28:43.827 Removing: /var/run/dpdk/spdk_pid72948 00:28:43.827 Removing: /var/run/dpdk/spdk_pid73112 00:28:43.827 Removing: /var/run/dpdk/spdk_pid73204 00:28:43.827 Removing: /var/run/dpdk/spdk_pid73515 00:28:43.827 Removing: /var/run/dpdk/spdk_pid73758 00:28:43.827 Removing: /var/run/dpdk/spdk_pid74095 00:28:43.827 Removing: /var/run/dpdk/spdk_pid74277 00:28:43.827 Removing: /var/run/dpdk/spdk_pid74364 00:28:43.827 Removing: /var/run/dpdk/spdk_pid74411 00:28:43.827 Removing: /var/run/dpdk/spdk_pid74504 00:28:43.827 Removing: /var/run/dpdk/spdk_pid74524 00:28:43.827 Removing: /var/run/dpdk/spdk_pid74582 00:28:43.827 Removing: /var/run/dpdk/spdk_pid74743 00:28:43.827 Removing: /var/run/dpdk/spdk_pid74963 00:28:43.827 Removing: /var/run/dpdk/spdk_pid75231 00:28:43.827 Removing: /var/run/dpdk/spdk_pid75511 00:28:43.827 Removing: /var/run/dpdk/spdk_pid75774 00:28:43.827 Removing: /var/run/dpdk/spdk_pid76116 00:28:43.827 Removing: /var/run/dpdk/spdk_pid76248 00:28:43.827 Removing: /var/run/dpdk/spdk_pid76327 00:28:43.827 Removing: /var/run/dpdk/spdk_pid76673 00:28:43.827 Removing: /var/run/dpdk/spdk_pid76732 00:28:43.827 Removing: /var/run/dpdk/spdk_pid77018 00:28:43.827 Removing: /var/run/dpdk/spdk_pid77272 00:28:43.827 Removing: /var/run/dpdk/spdk_pid77621 00:28:43.827 Removing: /var/run/dpdk/spdk_pid77736 00:28:43.827 Removing: /var/run/dpdk/spdk_pid77779 00:28:43.827 Removing: /var/run/dpdk/spdk_pid77836 00:28:43.827 Removing: /var/run/dpdk/spdk_pid77890 00:28:43.827 Removing: /var/run/dpdk/spdk_pid77954 00:28:43.827 Removing: /var/run/dpdk/spdk_pid78142 00:28:43.827 Removing: /var/run/dpdk/spdk_pid78216 00:28:43.827 Removing: /var/run/dpdk/spdk_pid78283 00:28:43.827 Removing: /var/run/dpdk/spdk_pid78339 00:28:43.827 Removing: /var/run/dpdk/spdk_pid78379 00:28:43.827 Removing: /var/run/dpdk/spdk_pid78458 00:28:43.827 Removing: /var/run/dpdk/spdk_pid78625 00:28:43.827 Removing: /var/run/dpdk/spdk_pid78847 00:28:43.827 Removing: /var/run/dpdk/spdk_pid79531 00:28:43.827 Removing: /var/run/dpdk/spdk_pid80271 00:28:43.827 Removing: /var/run/dpdk/spdk_pid80932 00:28:43.827 Removing: /var/run/dpdk/spdk_pid81674 00:28:43.827 Clean 00:28:44.088 03:55:36 -- common/autotest_common.sh@1451 -- # return 0 00:28:44.088 03:55:36 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:28:44.088 03:55:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:44.088 03:55:36 -- common/autotest_common.sh@10 -- # set +x 00:28:44.088 03:55:36 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:28:44.088 03:55:36 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:44.088 03:55:36 -- common/autotest_common.sh@10 -- # set +x 00:28:44.088 03:55:36 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:28:44.088 03:55:36 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:28:44.088 03:55:36 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:28:44.088 03:55:36 -- 
spdk/autotest.sh@392 -- # [[ y == y ]] 00:28:44.088 03:55:36 -- spdk/autotest.sh@394 -- # hostname 00:28:44.088 03:55:36 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:28:44.349 geninfo: WARNING: invalid characters removed from testname! 00:29:10.942 03:56:01 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:12.325 03:56:04 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:14.228 03:56:06 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:16.136 03:56:08 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:18.045 03:56:10 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:20.584 03:56:12 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:22.493 03:56:15 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:22.754 03:56:15 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:29:22.754 03:56:15 -- common/autotest_common.sh@1681 -- $ lcov --version 00:29:22.754 03:56:15 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:29:22.754 03:56:15 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:29:22.754 03:56:15 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:29:22.754 03:56:15 -- scripts/common.sh@333 -- $ local ver1 ver1_l 
00:29:22.754 03:56:15 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:29:22.754 03:56:15 -- scripts/common.sh@336 -- $ IFS=.-: 00:29:22.754 03:56:15 -- scripts/common.sh@336 -- $ read -ra ver1 00:29:22.754 03:56:15 -- scripts/common.sh@337 -- $ IFS=.-: 00:29:22.754 03:56:15 -- scripts/common.sh@337 -- $ read -ra ver2 00:29:22.754 03:56:15 -- scripts/common.sh@338 -- $ local 'op=<' 00:29:22.754 03:56:15 -- scripts/common.sh@340 -- $ ver1_l=2 00:29:22.754 03:56:15 -- scripts/common.sh@341 -- $ ver2_l=1 00:29:22.754 03:56:15 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:29:22.754 03:56:15 -- scripts/common.sh@344 -- $ case "$op" in 00:29:22.754 03:56:15 -- scripts/common.sh@345 -- $ : 1 00:29:22.754 03:56:15 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:29:22.754 03:56:15 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:22.754 03:56:15 -- scripts/common.sh@365 -- $ decimal 1 00:29:22.754 03:56:15 -- scripts/common.sh@353 -- $ local d=1 00:29:22.754 03:56:15 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:29:22.754 03:56:15 -- scripts/common.sh@355 -- $ echo 1 00:29:22.754 03:56:15 -- scripts/common.sh@365 -- $ ver1[v]=1 00:29:22.754 03:56:15 -- scripts/common.sh@366 -- $ decimal 2 00:29:22.754 03:56:15 -- scripts/common.sh@353 -- $ local d=2 00:29:22.754 03:56:15 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:29:22.754 03:56:15 -- scripts/common.sh@355 -- $ echo 2 00:29:22.754 03:56:15 -- scripts/common.sh@366 -- $ ver2[v]=2 00:29:22.754 03:56:15 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:29:22.754 03:56:15 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:29:22.754 03:56:15 -- scripts/common.sh@368 -- $ return 0 00:29:22.754 03:56:15 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:22.754 03:56:15 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:29:22.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.754 --rc genhtml_branch_coverage=1 00:29:22.754 --rc genhtml_function_coverage=1 00:29:22.754 --rc genhtml_legend=1 00:29:22.754 --rc geninfo_all_blocks=1 00:29:22.754 --rc geninfo_unexecuted_blocks=1 00:29:22.754 00:29:22.754 ' 00:29:22.754 03:56:15 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:29:22.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.754 --rc genhtml_branch_coverage=1 00:29:22.754 --rc genhtml_function_coverage=1 00:29:22.754 --rc genhtml_legend=1 00:29:22.754 --rc geninfo_all_blocks=1 00:29:22.754 --rc geninfo_unexecuted_blocks=1 00:29:22.754 00:29:22.754 ' 00:29:22.754 03:56:15 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:29:22.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.754 --rc genhtml_branch_coverage=1 00:29:22.754 --rc genhtml_function_coverage=1 00:29:22.754 --rc genhtml_legend=1 00:29:22.754 --rc geninfo_all_blocks=1 00:29:22.754 --rc geninfo_unexecuted_blocks=1 00:29:22.754 00:29:22.754 ' 00:29:22.754 03:56:15 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:29:22.754 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.754 --rc genhtml_branch_coverage=1 00:29:22.754 --rc genhtml_function_coverage=1 00:29:22.754 --rc genhtml_legend=1 00:29:22.754 --rc geninfo_all_blocks=1 00:29:22.754 --rc geninfo_unexecuted_blocks=1 00:29:22.754 00:29:22.754 ' 00:29:22.754 03:56:15 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:22.754 03:56:15 -- scripts/common.sh@15 -- $ 
shopt -s extglob 00:29:22.754 03:56:15 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:29:22.754 03:56:15 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.754 03:56:15 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.754 03:56:15 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.754 03:56:15 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.754 03:56:15 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.754 03:56:15 -- paths/export.sh@5 -- $ export PATH 00:29:22.754 03:56:15 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.754 03:56:15 -- common/autobuild_common.sh@478 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:29:22.754 03:56:15 -- common/autobuild_common.sh@479 -- $ date +%s 00:29:22.754 03:56:15 -- common/autobuild_common.sh@479 -- $ mktemp -dt spdk_1727754975.XXXXXX 00:29:22.754 03:56:15 -- common/autobuild_common.sh@479 -- $ SPDK_WORKSPACE=/tmp/spdk_1727754975.U5NaIa 00:29:22.754 03:56:15 -- common/autobuild_common.sh@481 -- $ [[ -n '' ]] 00:29:22.754 03:56:15 -- common/autobuild_common.sh@485 -- $ '[' -n '' ']' 00:29:22.754 03:56:15 -- common/autobuild_common.sh@488 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:29:22.754 03:56:15 -- common/autobuild_common.sh@492 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:29:22.754 03:56:15 -- common/autobuild_common.sh@494 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:29:22.754 03:56:15 -- common/autobuild_common.sh@495 -- $ get_config_params 00:29:22.754 03:56:15 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:29:22.754 03:56:15 -- common/autotest_common.sh@10 -- $ set +x 00:29:22.754 03:56:15 -- common/autobuild_common.sh@495 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage 
--with-ublk --with-xnvme' 00:29:22.754 03:56:15 -- common/autobuild_common.sh@497 -- $ start_monitor_resources 00:29:22.754 03:56:15 -- pm/common@17 -- $ local monitor 00:29:22.754 03:56:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:22.754 03:56:15 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:22.754 03:56:15 -- pm/common@25 -- $ sleep 1 00:29:22.754 03:56:15 -- pm/common@21 -- $ date +%s 00:29:22.754 03:56:15 -- pm/common@21 -- $ date +%s 00:29:22.754 03:56:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727754975 00:29:22.754 03:56:15 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1727754975 00:29:22.754 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727754975_collect-cpu-load.pm.log 00:29:22.754 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1727754975_collect-vmstat.pm.log 00:29:23.696 03:56:16 -- common/autobuild_common.sh@498 -- $ trap stop_monitor_resources EXIT 00:29:23.696 03:56:16 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:29:23.696 03:56:16 -- spdk/autopackage.sh@14 -- $ timing_finish 00:29:23.696 03:56:16 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:23.696 03:56:16 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:23.696 03:56:16 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:23.696 03:56:16 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:29:23.696 03:56:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:29:23.696 03:56:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:29:23.696 03:56:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.696 03:56:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:29:23.957 03:56:16 -- pm/common@44 -- $ pid=83378 00:29:23.957 03:56:16 -- pm/common@50 -- $ kill -TERM 83378 00:29:23.957 03:56:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:29:23.957 03:56:16 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:29:23.957 03:56:16 -- pm/common@44 -- $ pid=83379 00:29:23.957 03:56:16 -- pm/common@50 -- $ kill -TERM 83379 00:29:23.957 + [[ -n 5034 ]] 00:29:23.957 + sudo kill 5034 00:29:23.968 [Pipeline] } 00:29:23.982 [Pipeline] // timeout 00:29:23.987 [Pipeline] } 00:29:24.001 [Pipeline] // stage 00:29:24.006 [Pipeline] } 00:29:24.019 [Pipeline] // catchError 00:29:24.028 [Pipeline] stage 00:29:24.030 [Pipeline] { (Stop VM) 00:29:24.041 [Pipeline] sh 00:29:24.325 + vagrant halt 00:29:26.874 ==> default: Halting domain... 00:29:33.482 [Pipeline] sh 00:29:33.766 + vagrant destroy -f 00:29:36.312 ==> default: Removing domain... 
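
The stop_monitor_resources / signal_monitor_resources trace just above follows a standard pid-file teardown: for each resource monitor, test for its pid file, read the recorded pid, and send TERM. A minimal bash sketch of that pattern follows; the pid-file names and the TERM signal come straight from the log, but the function body, the power_dir variable, and the stale-pid-file cleanup are illustrative assumptions, not the actual scripts/perf/pm code.

    #!/usr/bin/env bash
    # Sketch of the pid-file stop pattern traced above (assumed shape, not verbatim).
    power_dir=/home/vagrant/spdk_repo/spdk/../output/power   # path taken from the log
    signal_monitor_resources() {
        local signal=$1 pidfile pid
        for pidfile in "$power_dir"/collect-cpu-load.pid "$power_dir"/collect-vmstat.pid; do
            [[ -e $pidfile ]] || continue       # monitor was never started
            pid=$(<"$pidfile")                  # pid recorded when the monitor launched
            kill -"$signal" "$pid" 2>/dev/null  # monitor may already have exited
            rm -f "$pidfile"                    # assumption: drop the stale pid file
        done
    }
    signal_monitor_resources TERM               # mirrors 'kill -TERM 83378 / 83379' above
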
00:29:36.999 [Pipeline] sh 00:29:37.293 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:29:37.303 [Pipeline] } 00:29:37.318 [Pipeline] // stage 00:29:37.324 [Pipeline] } 00:29:37.338 [Pipeline] // dir 00:29:37.343 [Pipeline] } 00:29:37.357 [Pipeline] // wrap 00:29:37.364 [Pipeline] } 00:29:37.377 [Pipeline] // catchError 00:29:37.387 [Pipeline] stage 00:29:37.389 [Pipeline] { (Epilogue) 00:29:37.404 [Pipeline] sh 00:29:37.689 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:29:42.980 [Pipeline] catchError 00:29:42.981 [Pipeline] { 00:29:42.993 [Pipeline] sh 00:29:43.277 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:29:43.277 Artifacts sizes are good 00:29:43.287 [Pipeline] } 00:29:43.300 [Pipeline] // catchError 00:29:43.310 [Pipeline] archiveArtifacts 00:29:43.317 Archiving artifacts 00:29:43.435 [Pipeline] cleanWs 00:29:43.446 [WS-CLEANUP] Deleting project workspace... 00:29:43.446 [WS-CLEANUP] Deferred wipeout is used... 00:29:43.453 [WS-CLEANUP] done 00:29:43.454 [Pipeline] } 00:29:43.469 [Pipeline] // stage 00:29:43.473 [Pipeline] } 00:29:43.486 [Pipeline] // node 00:29:43.491 [Pipeline] End of Pipeline 00:29:43.532 Finished: SUCCESS
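
The coverage post-processing traced in the 03:56:01 through 03:56:15 entries reduces to one lcov merge followed by a chain of path filters over the same tracefile. A condensed sketch of that flow, reusing the flags, file names, and patterns the log shows; the loop and the out variable are illustrative, and the --ignore-errors option the log passes for the '/usr/*' filter is omitted here:

    # Sketch of the lcov aggregate-and-filter flow the log traces (assumed shape).
    # LCOV_OPTS carries the branch/function-coverage switches exported earlier in the trace.
    out=/home/vagrant/spdk_repo/spdk/../output          # output dir from the log
    # merge the base and test captures into a single tracefile
    lcov $LCOV_OPTS -q -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
    # strip paths that should not count toward coverage, one -r (remove) filter at a time
    for pattern in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
        lcov $LCOV_OPTS -q -r "$out/cov_total.info" "$pattern" -o "$out/cov_total.info"
    done
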