00:00:00.001 Started by upstream project "autotest-per-patch" build number 131880
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.042 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.042 The recommended git tool is: git
00:00:00.043 using credential 00000000-0000-0000-0000-000000000002
00:00:00.046 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.067 Fetching changes from the remote Git repository
00:00:00.070 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.107 Using shallow fetch with depth 1
00:00:00.107 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.107 > git --version # timeout=10
00:00:00.151 > git --version # 'git version 2.39.2'
00:00:00.151 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.189 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.189 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.815 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.826 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.838 Checking out Revision 44e7d6069a399ee2647233b387d68a938882e7b7 (FETCH_HEAD)
00:00:04.838 > git config core.sparsecheckout # timeout=10
00:00:04.847 > git read-tree -mu HEAD # timeout=10
00:00:04.865 > git checkout -f 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=5
00:00:04.885 Commit message: "scripts/bmc: Rework Get NIC Info cmd parser"
00:00:04.885 > git rev-list --no-walk 44e7d6069a399ee2647233b387d68a938882e7b7 # timeout=10
00:00:05.005 [Pipeline] Start of Pipeline
00:00:05.017 [Pipeline] library
00:00:05.018 Loading library shm_lib@master
00:00:05.018 Library shm_lib@master is cached. Copying from home.
00:00:05.030 [Pipeline] node
00:00:05.037 Running on VM-host-SM9 in /var/jenkins/workspace/nvme-vg-autotest_2
00:00:05.039 [Pipeline] {
00:00:05.046 [Pipeline] catchError
00:00:05.046 [Pipeline] {
00:00:05.053 [Pipeline] wrap
00:00:05.058 [Pipeline] {
00:00:05.065 [Pipeline] stage
00:00:05.066 [Pipeline] { (Prologue)
00:00:05.082 [Pipeline] echo
00:00:05.084 Node: VM-host-SM9
00:00:05.089 [Pipeline] cleanWs
00:00:05.099 [WS-CLEANUP] Deleting project workspace...
00:00:05.099 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.105 [WS-CLEANUP] done
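The checkout above pins a single revision with a depth-1 fetch instead of a full clone of the pool repository. A minimal standalone sketch of the same sequence (repository URL and git flags taken from the trace above; credential and proxy handling omitted):

    #!/usr/bin/env bash
    # Shallow-fetch the branch tip, resolve it to a commit, and check it out
    # detached, mirroring the git plugin commands logged above.
    set -euo pipefail
    repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
    git init jbp && cd jbp
    git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
    rev=$(git rev-parse 'FETCH_HEAD^{commit}')   # resolve the fetched tip to a SHA
    git checkout -f "$rev"                       # detached HEAD at the pinned revision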
00:00:05.291 [Pipeline] setCustomBuildProperty
00:00:05.357 [Pipeline] httpRequest
00:00:05.701 [Pipeline] echo
00:00:05.702 Sorcerer 10.211.164.101 is alive
00:00:05.708 [Pipeline] retry
00:00:05.710 [Pipeline] {
00:00:05.720 [Pipeline] httpRequest
00:00:05.723 HttpMethod: GET
00:00:05.724 URL: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:05.725 Sending request to url: http://10.211.164.101/packages/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:05.725 Response Code: HTTP/1.1 200 OK
00:00:05.726 Success: Status code 200 is in the accepted range: 200,404
00:00:05.726 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:06.354 [Pipeline] }
00:00:06.374 [Pipeline] // retry
00:00:06.383 [Pipeline] sh
00:00:06.664 + tar --no-same-owner -xf jbp_44e7d6069a399ee2647233b387d68a938882e7b7.tar.gz
00:00:06.686 [Pipeline] httpRequest
00:00:07.265 [Pipeline] echo
00:00:07.266 Sorcerer 10.211.164.101 is alive
00:00:07.278 [Pipeline] retry
00:00:07.281 [Pipeline] {
00:00:07.303 [Pipeline] httpRequest
00:00:07.307 HttpMethod: GET
00:00:07.307 URL: http://10.211.164.101/packages/spdk_d490b55760dd39f911df3eb10400279eed92132d.tar.gz
00:00:07.307 Sending request to url: http://10.211.164.101/packages/spdk_d490b55760dd39f911df3eb10400279eed92132d.tar.gz
00:00:07.319 Response Code: HTTP/1.1 200 OK
00:00:07.320 Success: Status code 200 is in the accepted range: 200,404
00:00:07.320 Saving response body to /var/jenkins/workspace/nvme-vg-autotest_2/spdk_d490b55760dd39f911df3eb10400279eed92132d.tar.gz
00:00:53.842 [Pipeline] }
00:00:53.859 [Pipeline] // retry
00:00:53.866 [Pipeline] sh
00:00:54.143 + tar --no-same-owner -xf spdk_d490b55760dd39f911df3eb10400279eed92132d.tar.gz
00:00:57.436 [Pipeline] sh
00:00:57.715 + git -C spdk log --oneline -n5
00:00:57.715 d490b5576 nvme/perf: interrupt mode support for pcie controller
00:00:57.715 df16511bb bdev/nvme: interrupt mode for PCIe transport
00:00:57.715 eb2c2fdb8 lib/nvme: eventfd to handle disconnected I/O qpair
00:00:57.715 4141364db nvme/poll_group: create and manage fd_group for nvme poll group
00:00:57.715 f89df0736 nvme: interface to check disconnected queue pairs
00:00:57.733 [Pipeline] writeFile
00:00:57.750 [Pipeline] sh
00:00:58.031 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:00:58.044 [Pipeline] sh
00:00:58.327 + cat autorun-spdk.conf
00:00:58.327 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:58.327 SPDK_TEST_NVME=1
00:00:58.327 SPDK_TEST_FTL=1
00:00:58.327 SPDK_TEST_ISAL=1
00:00:58.327 SPDK_RUN_ASAN=1
00:00:58.327 SPDK_RUN_UBSAN=1
00:00:58.327 SPDK_TEST_XNVME=1
00:00:58.327 SPDK_TEST_NVME_FDP=1
00:00:58.327 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:58.333 RUN_NIGHTLY=0
00:00:58.335 [Pipeline] }
00:00:58.352 [Pipeline] // stage
00:00:58.367 [Pipeline] stage
00:00:58.369 [Pipeline] { (Run VM)
00:00:58.383 [Pipeline] sh
00:00:58.665 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:00:58.665 + echo 'Start stage prepare_nvme.sh'
00:00:58.665 Start stage prepare_nvme.sh
00:00:58.665 + [[ -n 2 ]]
00:00:58.665 + disk_prefix=ex2
00:00:58.665 + [[ -n /var/jenkins/workspace/nvme-vg-autotest_2 ]]
00:00:58.665 + [[ -e /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf ]]
00:00:58.665 + source /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf
00:00:58.665 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:58.665 ++ SPDK_TEST_NVME=1
00:00:58.665 ++ SPDK_TEST_FTL=1
00:00:58.665 ++ SPDK_TEST_ISAL=1
00:00:58.665 ++ SPDK_RUN_ASAN=1
00:00:58.665 ++ SPDK_RUN_UBSAN=1
00:00:58.665 ++ SPDK_TEST_XNVME=1
00:00:58.665 ++ SPDK_TEST_NVME_FDP=1
00:00:58.665 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:00:58.665 ++ RUN_NIGHTLY=0
00:00:58.665 + cd /var/jenkins/workspace/nvme-vg-autotest_2
00:00:58.665 + nvme_files=()
00:00:58.665 + declare -A nvme_files
00:00:58.665 + backend_dir=/var/lib/libvirt/images/backends
00:00:58.665 + nvme_files['nvme.img']=5G
00:00:58.665 + nvme_files['nvme-cmb.img']=5G
00:00:58.665 + nvme_files['nvme-multi0.img']=4G
00:00:58.665 + nvme_files['nvme-multi1.img']=4G
00:00:58.665 + nvme_files['nvme-multi2.img']=4G
00:00:58.665 + nvme_files['nvme-openstack.img']=8G
00:00:58.665 + nvme_files['nvme-zns.img']=5G
00:00:58.665 + (( SPDK_TEST_NVME_PMR == 1 ))
00:00:58.665 + (( SPDK_TEST_FTL == 1 ))
00:00:58.665 + nvme_files["nvme-ftl.img"]=6G
00:00:58.665 + (( SPDK_TEST_NVME_FDP == 1 ))
00:00:58.665 + nvme_files["nvme-fdp.img"]=1G
00:00:58.665 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:00:58.665 + for nvme in "${!nvme_files[@]}"
00:00:58.665 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi2.img -s 4G
00:00:58.665 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:00:58.665 + for nvme in "${!nvme_files[@]}"
00:00:58.665 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-ftl.img -s 6G
00:00:58.665 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:00:58.665 + for nvme in "${!nvme_files[@]}"
00:00:58.665 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-cmb.img -s 5G
00:00:58.923 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:00:58.923 + for nvme in "${!nvme_files[@]}"
00:00:58.923 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-openstack.img -s 8G
00:00:59.180 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:00:59.180 + for nvme in "${!nvme_files[@]}"
00:00:59.180 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-zns.img -s 5G
00:00:59.180 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:00:59.180 + for nvme in "${!nvme_files[@]}"
00:00:59.180 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi1.img -s 4G
00:00:59.180 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:00:59.180 + for nvme in "${!nvme_files[@]}"
00:00:59.180 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-multi0.img -s 4G
00:00:59.180 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:00:59.180 + for nvme in "${!nvme_files[@]}"
00:00:59.180 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme-fdp.img -s 1G
00:00:59.438 Formatting '/var/lib/libvirt/images/backends/ex2-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:00:59.438 + for nvme in "${!nvme_files[@]}"
00:00:59.438 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex2-nvme.img -s 5G
00:00:59.695 Formatting '/var/lib/libvirt/images/backends/ex2-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:00:59.695 ++ sudo grep -rl ex2-nvme.img /etc/libvirt/qemu
00:00:59.695 + echo 'End stage prepare_nvme.sh'
00:00:59.695 End stage prepare_nvme.sh
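prepare_nvme.sh drives the whole stage from one associative array mapping image name to size, extended per enabled feature flag, then looped over to create each raw backing file. A condensed sketch of that pattern (helper path, sizes, and the ex2 prefix as they appear in the trace above):

    #!/usr/bin/env bash
    # One raw backing file per emulated NVMe disk; FTL/FDP images only when enabled.
    declare -A nvme_files=( [nvme.img]=5G [nvme-multi0.img]=4G [nvme-zns.img]=5G )
    (( SPDK_TEST_FTL == 1 ))      && nvme_files[nvme-ftl.img]=6G
    (( SPDK_TEST_NVME_FDP == 1 )) && nvme_files[nvme-fdp.img]=1G
    backend_dir=/var/lib/libvirt/images/backends
    for nvme in "${!nvme_files[@]}"; do
        sudo -E spdk/scripts/vagrant/create_nvme_img.sh \
            -n "${backend_dir}/ex2-${nvme}" -s "${nvme_files[$nvme]}"
    done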
00:00:59.706 [Pipeline] sh
00:00:59.986 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:00:59.987 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex2-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex2-nvme.img -b /var/lib/libvirt/images/backends/ex2-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex2-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:00:59.987
00:00:59.987 DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant
00:00:59.987 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest_2/spdk
00:00:59.987 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest_2
00:00:59.987 HELP=0
00:00:59.987 DRY_RUN=0
00:00:59.987 NVME_FILE=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,/var/lib/libvirt/images/backends/ex2-nvme.img,/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,
00:00:59.987 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:00:59.987 NVME_AUTO_CREATE=0
00:00:59.987 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex2-nvme-multi1.img:/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,,
00:00:59.987 NVME_CMB=,,,,
00:00:59.987 NVME_PMR=,,,,
00:00:59.987 NVME_ZNS=,,,,
00:00:59.987 NVME_MS=true,,,,
00:00:59.987 NVME_FDP=,,,on,
00:00:59.987 SPDK_VAGRANT_DISTRO=fedora39
00:00:59.987 SPDK_VAGRANT_VMCPU=10
00:00:59.987 SPDK_VAGRANT_VMRAM=12288
00:00:59.987 SPDK_VAGRANT_PROVIDER=libvirt
00:00:59.987 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:00:59.987 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:00:59.987 SPDK_OPENSTACK_NETWORK=0
00:00:59.987 VAGRANT_PACKAGE_BOX=0
00:00:59.987 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:00:59.987 FORCE_DISTRO=true
00:00:59.987 VAGRANT_BOX_VERSION=
00:00:59.987 EXTRA_VAGRANTFILES=
00:00:59.987 NIC_MODEL=e1000
00:00:59.987
00:00:59.987 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt'
00:00:59.987 /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest_2
00:01:04.179 Bringing machine 'default' up with 'libvirt' provider...
00:01:04.179 ==> default: Creating image (snapshot of base box volume).
00:01:04.457 ==> default: Creating domain with the following settings...
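Each NVME_* variable in the dump above is a comma-separated list with one slot per emulated disk, so slot i of NVME_FILE, NVME_DISKS_TYPE, NVME_MS, NVME_FDP, and the rest all describe the same device, and an empty slot means "default for this disk". A hypothetical bash sketch of consuming such positional lists (variable names from the dump; the parsing is illustrative, not the script's actual code):

    #!/usr/bin/env bash
    # Slot i of every list describes emulated disk i; an empty slot = feature off.
    NVME_FILE="ftl.img,nvme.img,multi0.img,fdp.img,"
    NVME_FDP=",,,on,"
    IFS=',' read -ra files <<< "$NVME_FILE"
    IFS=',' read -ra fdp <<< "$NVME_FDP"
    for i in "${!files[@]}"; do
        [[ -n "${files[$i]}" ]] || continue          # skip empty trailing slots
        echo "disk $i: image=${files[$i]} fdp=${fdp[$i]:-off}"
    done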
00:01:04.457 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1730137880_6cdc3cadeb3bca9ad73b
00:01:04.457 ==> default: -- Domain type: kvm
00:01:04.457 ==> default: -- Cpus: 10
00:01:04.457 ==> default: -- Feature: acpi
00:01:04.457 ==> default: -- Feature: apic
00:01:04.457 ==> default: -- Feature: pae
00:01:04.457 ==> default: -- Memory: 12288M
00:01:04.457 ==> default: -- Memory Backing: hugepages:
00:01:04.457 ==> default: -- Management MAC:
00:01:04.457 ==> default: -- Loader:
00:01:04.457 ==> default: -- Nvram:
00:01:04.457 ==> default: -- Base box: spdk/fedora39
00:01:04.457 ==> default: -- Storage pool: default
00:01:04.457 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1730137880_6cdc3cadeb3bca9ad73b.img (20G)
00:01:04.457 ==> default: -- Volume Cache: default
00:01:04.457 ==> default: -- Kernel:
00:01:04.457 ==> default: -- Initrd:
00:01:04.457 ==> default: -- Graphics Type: vnc
00:01:04.457 ==> default: -- Graphics Port: -1
00:01:04.457 ==> default: -- Graphics IP: 127.0.0.1
00:01:04.457 ==> default: -- Graphics Password: Not defined
00:01:04.457 ==> default: -- Video Type: cirrus
00:01:04.457 ==> default: -- Video VRAM: 9216
00:01:04.457 ==> default: -- Sound Type:
00:01:04.457 ==> default: -- Keymap: en-us
00:01:04.457 ==> default: -- TPM Path:
00:01:04.457 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:04.457 ==> default: -- Command line args:
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:04.457 ==> default: -> value=-drive,
00:01:04.457 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:04.457 ==> default: -> value=-drive,
00:01:04.457 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme.img,if=none,id=nvme-1-drive0,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:04.457 ==> default: -> value=-drive,
00:01:04.457 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.457 ==> default: -> value=-drive,
00:01:04.457 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.457 ==> default: -> value=-drive,
00:01:04.457 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:04.457 ==> default: -> value=-drive,
00:01:04.457 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex2-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:04.457 ==> default: -> value=-device,
00:01:04.457 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:04.457 ==> default: Creating shared folders metadata...
00:01:04.457 ==> default: Starting domain.
00:01:05.834 ==> default: Waiting for domain to get an IP address...
00:01:23.941 ==> default: Waiting for SSH to become available...
00:01:23.941 ==> default: Configuring and enabling network interfaces...
00:01:26.469 default: SSH address: 192.168.121.80:22
00:01:26.469 default: SSH username: vagrant
00:01:26.469 default: SSH auth method: private key
00:01:28.368 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:36.477 ==> default: Mounting SSHFS shared folder...
00:01:37.410 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:37.410 ==> default: Checking Mount..
00:01:38.784 ==> default: Folder Successfully Mounted!
00:01:38.784 ==> default: Running provisioner: file...
00:01:39.351 default: ~/.gitconfig => .gitconfig
00:01:39.917
00:01:39.917 SUCCESS!
00:01:39.917
00:01:39.917 cd to /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:39.917 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:39.917 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:39.917
00:01:39.925 [Pipeline] }
00:01:39.940 [Pipeline] // stage
00:01:39.950 [Pipeline] dir
00:01:39.950 Running in /var/jenkins/workspace/nvme-vg-autotest_2/fedora39-libvirt
00:01:39.952 [Pipeline] {
00:01:39.965 [Pipeline] catchError
00:01:39.966 [Pipeline] {
00:01:39.978 [Pipeline] sh
00:01:40.258 + vagrant ssh-config --host vagrant
00:01:40.258 + sed -ne /^Host/,$p
00:01:40.258 + tee ssh_conf
00:01:44.463 Host vagrant
00:01:44.463 HostName 192.168.121.80
00:01:44.463 User vagrant
00:01:44.463 Port 22
00:01:44.463 UserKnownHostsFile /dev/null
00:01:44.463 StrictHostKeyChecking no
00:01:44.463 PasswordAuthentication no
00:01:44.463 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:44.463 IdentitiesOnly yes
00:01:44.463 LogLevel FATAL
00:01:44.463 ForwardAgent yes
00:01:44.463 ForwardX11 yes
00:01:44.463
00:01:44.527 [Pipeline] withEnv
00:01:44.529 [Pipeline] {
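Every later ssh and scp call in this job reuses the ssh_conf file distilled above from vagrant's own view of the VM. The same three-step pipeline as a standalone snippet (taken directly from the trace; sed keeps everything from the first Host line onward, and tee both saves the config and echoes it into the log):

    # Generate ssh_conf once, then address the VM as plain `vagrant@vagrant`.
    vagrant ssh-config --host vagrant | sed -ne '/^Host/,$p' | tee ssh_conf
    ssh -t -F ssh_conf vagrant@vagrant 'uname -a'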
00:01:44.541 [Pipeline] sh
00:01:44.815 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:44.815 source /etc/os-release
00:01:44.815 [[ -e /image.version ]] && img=$(< /image.version)
00:01:44.815 # Minimal, systemd-like check.
00:01:44.815 if [[ -e /.dockerenv ]]; then
00:01:44.815 # Clear garbage from the node's name:
00:01:44.815 # agt-er_autotest_547-896 -> autotest_547-896
00:01:44.815 # $HOSTNAME is the actual container id
00:01:44.815 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:44.815 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:44.815 # We can assume this is a mount from a host where container is running,
00:01:44.815 # so fetch its hostname to easily identify the target swarm worker.
00:01:44.815 container="$(< /etc/hostname) ($agent)"
00:01:44.815 else
00:01:44.815 # Fallback
00:01:44.815 container=$agent
00:01:44.815 fi
00:01:44.815 fi
00:01:44.815 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:44.815
00:01:44.825 [Pipeline] }
00:01:44.844 [Pipeline] // withEnv
00:01:44.852 [Pipeline] setCustomBuildProperty
00:01:44.864 [Pipeline] stage
00:01:44.866 [Pipeline] { (Tests)
00:01:44.882 [Pipeline] sh
00:01:45.162 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:45.434 [Pipeline] sh
00:01:45.715 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:45.987 [Pipeline] timeout
00:01:45.987 Timeout set to expire in 50 min
00:01:45.989 [Pipeline] {
00:01:46.022 [Pipeline] sh
00:01:46.320 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:46.886 HEAD is now at d490b5576 nvme/perf: interrupt mode support for pcie controller
00:01:46.898 [Pipeline] sh
00:01:47.176 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:47.449 [Pipeline] sh
00:01:47.728 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:48.001 [Pipeline] sh
00:01:48.279 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo
00:01:48.279 ++ readlink -f spdk_repo
00:01:48.279 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:48.279 + [[ -n /home/vagrant/spdk_repo ]]
00:01:48.279 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:48.279 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:48.279 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:48.279 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:48.279 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:48.279 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:48.279 + cd /home/vagrant/spdk_repo
00:01:48.279 + source /etc/os-release
00:01:48.279 ++ NAME='Fedora Linux'
00:01:48.279 ++ VERSION='39 (Cloud Edition)'
00:01:48.279 ++ ID=fedora
00:01:48.279 ++ VERSION_ID=39
00:01:48.279 ++ VERSION_CODENAME=
00:01:48.279 ++ PLATFORM_ID=platform:f39
00:01:48.279 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:48.279 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:48.279 ++ LOGO=fedora-logo-icon
00:01:48.279 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:48.279 ++ HOME_URL=https://fedoraproject.org/
00:01:48.279 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:48.279 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:48.279 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:48.279 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:48.279 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:48.279 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:48.279 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:48.279 ++ SUPPORT_END=2024-11-12
00:01:48.279 ++ VARIANT='Cloud Edition'
00:01:48.279 ++ VARIANT_ID=cloud
00:01:48.279 + uname -a
00:01:48.279 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:48.279 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:48.845 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:49.125 Hugepages
00:01:49.125 node hugesize free / total
00:01:49.125 node0 1048576kB 0 / 0
00:01:49.125 node0 2048kB 0 / 0
00:01:49.125
00:01:49.125 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:49.125 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:49.125 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1
00:01:49.125 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:49.125 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3
00:01:49.407 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
00:01:49.407 + rm -f /tmp/spdk-ld-path
00:01:49.407 + source autorun-spdk.conf
00:01:49.407 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.407 ++ SPDK_TEST_NVME=1
00:01:49.407 ++ SPDK_TEST_FTL=1
00:01:49.407 ++ SPDK_TEST_ISAL=1
00:01:49.407 ++ SPDK_RUN_ASAN=1
00:01:49.407 ++ SPDK_RUN_UBSAN=1
00:01:49.407 ++ SPDK_TEST_XNVME=1
00:01:49.407 ++ SPDK_TEST_NVME_FDP=1
00:01:49.407 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:49.407 ++ RUN_NIGHTLY=0
00:01:49.407 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:49.407 + [[ -n '' ]]
00:01:49.407 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:49.407 + for M in /var/spdk/build-*-manifest.txt
00:01:49.407 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:49.407 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:49.407 + for M in /var/spdk/build-*-manifest.txt
00:01:49.407 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:49.407 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:49.407 + for M in /var/spdk/build-*-manifest.txt
00:01:49.407 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:49.407 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:49.407 ++ uname
00:01:49.407 + [[ Linux == \L\i\n\u\x ]]
00:01:49.407 + sudo dmesg -T
00:01:49.407 + sudo dmesg --clear
00:01:49.407 + dmesg_pid=5295
+ sudo dmesg -Tw
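The manifest-copy step in the trace above uses a guarded glob so only the manifests that actually exist on the node get copied into the shared output folder. The same pattern, condensed (paths as in the trace):

    # Copy each build manifest present on the node into the shared output folder;
    # the [[ -f ]] guard skips an unmatched, unexpanded glob.
    for M in /var/spdk/build-*-manifest.txt; do
        [[ -f "$M" ]] && cp "$M" /home/vagrant/spdk_repo/output/
    done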
00:01:49.407 + [[ Fedora Linux == FreeBSD ]]
00:01:49.407 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.407 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:49.407 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:49.407 + [[ -x /usr/src/fio-static/fio ]]
00:01:49.407 + export FIO_BIN=/usr/src/fio-static/fio
00:01:49.407 + FIO_BIN=/usr/src/fio-static/fio
00:01:49.407 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:49.407 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:49.407 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:49.407 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:49.407 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:49.407 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:49.407 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:49.407 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:49.407 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:49.407 17:52:05 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:49.407 17:52:05 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:49.407 17:52:05 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=0
00:01:49.407 17:52:05 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:49.407 17:52:05 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:49.407 17:52:05 -- common/autotest_common.sh@1690 -- $ [[ n == y ]]
00:01:49.407 17:52:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:49.407 17:52:05 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:49.407 17:52:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:49.407 17:52:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:49.407 17:52:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:49.407 17:52:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:49.407 17:52:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:52:05 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:52:05 -- paths/export.sh@5 -- $ export PATH
17:52:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
17:52:05 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
17:52:05 -- common/autobuild_common.sh@486 -- $ date +%s
17:52:05 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1730137925.XXXXXX
17:52:05 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1730137925.s4ecnC
17:52:05 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
17:52:05 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
17:52:05 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
17:52:05 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
17:52:05 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
17:52:05 -- common/autobuild_common.sh@502 -- $ get_config_params
17:52:05 -- common/autotest_common.sh@407 -- $ xtrace_disable
17:52:05 -- common/autotest_common.sh@10 -- $ set +x
00:01:49.665 17:52:05 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
17:52:05 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
17:52:05 -- pm/common@17 -- $ local monitor
17:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:52:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
17:52:05 -- pm/common@25 -- $ sleep 1
17:52:05 -- pm/common@21 -- $ date +%s
17:52:05 -- pm/common@21 -- $ date +%s
17:52:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730137925
17:52:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1730137925
00:01:49.665 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730137925_collect-cpu-load.pm.log
00:01:49.665 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1730137925_collect-vmstat.pm.log
00:01:50.599 17:52:06 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:01:50.599 17:52:06 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:50.599 17:52:06 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:50.599 17:52:06 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:50.599 17:52:06 -- spdk/autobuild.sh@16 -- $ date -u
00:01:50.599 Mon Oct 28 05:52:06 PM UTC 2024
00:01:50.599 17:52:06 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:50.599 v25.01-pre-140-gd490b5576
00:01:50.599 17:52:06 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:50.599 17:52:06 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:50.599 17:52:06 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:50.599 17:52:06 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:50.599 17:52:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.599 ************************************
00:01:50.599 START TEST asan
00:01:50.599 ************************************
00:01:50.599 using asan
00:01:50.599 17:52:06 asan -- common/autotest_common.sh@1127 -- $ echo 'using asan'
00:01:50.599
00:01:50.599 real 0m0.000s
00:01:50.599 user 0m0.000s
00:01:50.599 sys 0m0.000s
00:01:50.599 17:52:06 asan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:50.599 17:52:06 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:50.599 ************************************
00:01:50.599 END TEST asan
00:01:50.599 ************************************
00:01:50.599 17:52:06 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:50.599 17:52:06 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:50.599 17:52:06 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:01:50.599 17:52:06 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:01:50.599 17:52:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:50.599 ************************************
00:01:50.599 START TEST ubsan
00:01:50.599 ************************************
00:01:50.599 17:52:06 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan'
00:01:50.599 using ubsan
00:01:50.599
00:01:50.599 real 0m0.000s
00:01:50.599 user 0m0.000s
00:01:50.599 sys 0m0.000s
00:01:50.599 ************************************
00:01:50.599 END TEST ubsan
00:01:50.599 ************************************
00:01:50.599 17:52:06 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable
00:01:50.599 17:52:06 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:50.599 17:52:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:50.599 17:52:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:50.599 17:52:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:50.599 17:52:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:50.599 17:52:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:50.599 17:52:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:50.599 17:52:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:50.599 17:52:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:50.599 17:52:07 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:50.857 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:50.858 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:51.115 Using 'verbs' RDMA provider
00:02:06.920 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:19.121 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:19.121 Creating mk/config.mk...done.
00:02:19.121 Creating mk/cc.flags.mk...done.
00:02:19.121 Type 'make' to build.
00:02:19.121 17:52:34 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:19.121 17:52:34 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']'
00:02:19.121 17:52:34 -- common/autotest_common.sh@1109 -- $ xtrace_disable
00:02:19.121 17:52:34 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.121 ************************************
00:02:19.121 START TEST make
00:02:19.121 ************************************
00:02:19.121 17:52:34 make -- common/autotest_common.sh@1127 -- $ make -j10
00:02:19.121 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:19.121 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:19.121 meson setup builddir \
00:02:19.121 -Dwith-libaio=enabled \
00:02:19.121 -Dwith-liburing=enabled \
00:02:19.121 -Dwith-libvfn=disabled \
00:02:19.121 -Dwith-spdk=disabled \
00:02:19.121 -Dexamples=false \
00:02:19.121 -Dtests=false \
00:02:19.121 -Dtools=false && \
00:02:19.121 meson compile -C builddir && \
00:02:19.121 cd -)
00:02:19.121 make[1]: Nothing to be done for 'all'.
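The xnvme sub-build shown above can be reproduced outside CI with the same meson invocation (a sketch; assumes meson and ninja are installed and the SPDK checkout sits at ~/spdk_repo/spdk):

    # Configure and build xnvme with the CI run's feature toggles.
    cd ~/spdk_repo/spdk/xnvme
    meson setup builddir \
        -Dwith-libaio=enabled -Dwith-liburing=enabled \
        -Dwith-libvfn=disabled -Dwith-spdk=disabled \
        -Dexamples=false -Dtests=false -Dtools=false
    meson compile -C builddir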
00:02:21.649 The Meson build system
00:02:21.649 Version: 1.5.0
00:02:21.649 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:21.649 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:21.649 Build type: native build
00:02:21.649 Project name: xnvme
00:02:21.649 Project version: 0.7.5
00:02:21.649 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:21.649 C linker for the host machine: cc ld.bfd 2.40-14
00:02:21.649 Host machine cpu family: x86_64
00:02:21.649 Host machine cpu: x86_64
00:02:21.649 Message: host_machine.system: linux
00:02:21.649 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:21.650 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:21.650 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:21.650 Run-time dependency threads found: YES
00:02:21.650 Has header "setupapi.h" : NO
00:02:21.650 Has header "linux/blkzoned.h" : YES
00:02:21.650 Has header "linux/blkzoned.h" : YES (cached)
00:02:21.650 Has header "libaio.h" : YES
00:02:21.650 Library aio found: YES
00:02:21.650 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:21.650 Run-time dependency liburing found: YES 2.2
00:02:21.650 Dependency libvfn skipped: feature with-libvfn disabled
00:02:21.650 Found CMake: /usr/bin/cmake (3.27.7)
00:02:21.650 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:21.650 Subproject spdk : skipped: feature with-spdk disabled
00:02:21.650 Run-time dependency appleframeworks found: NO (tried framework)
00:02:21.650 Run-time dependency appleframeworks found: NO (tried framework)
00:02:21.650 Library rt found: YES
00:02:21.650 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:21.650 Configuring xnvme_config.h using configuration
00:02:21.650 Configuring xnvme.spec using configuration
00:02:21.650 Run-time dependency bash-completion found: YES 2.11
00:02:21.650 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:21.650 Program cp found: YES (/usr/bin/cp)
00:02:21.650 Build targets in project: 3
00:02:21.650
00:02:21.650 xnvme 0.7.5
00:02:21.650
00:02:21.650 Subprojects
00:02:21.650 spdk : NO Feature 'with-spdk' disabled
00:02:21.650
00:02:21.650 User defined options
00:02:21.650 examples : false
00:02:21.650 tests : false
00:02:21.650 tools : false
00:02:21.650 with-libaio : enabled
00:02:21.650 with-liburing: enabled
00:02:21.650 with-libvfn : disabled
00:02:21.650 with-spdk : disabled
00:02:21.650
00:02:21.650 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:22.215 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:22.215 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:22.215 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:22.215 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:22.215 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:22.215 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:22.473 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:22.473 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:22.473 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:22.473 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:22.473 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:22.473 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:22.473 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:22.473 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:22.473 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:22.473 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:22.473 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:22.473 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:22.473 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:22.473 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:22.473 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:22.473 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:22.473 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:22.730 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:22.730 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:22.730 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:22.730 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:22.730 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:22.730 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:22.730 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:22.730 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:22.731 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:22.731 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:22.731 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:22.731 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:22.731 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:22.731 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:22.731 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:22.731 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:22.731 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:22.731 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:22.731 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:22.731 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:22.731 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:22.731 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:22.731 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:22.731 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:22.731 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:22.731 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:22.731 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:23.015 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:23.015 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:23.015 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:23.015 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:23.015 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:23.015 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:23.015 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:23.015 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:23.015 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:23.015 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:23.015 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:23.015 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:23.015 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:23.015 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:23.283 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:23.283 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:23.283 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:23.283 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:23.283 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:23.283 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:23.284 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:23.284 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:23.284 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:23.541 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:24.106 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:24.106 [75/76] Linking static target lib/libxnvme.a
00:02:24.106 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:24.106 INFO: autodetecting backend as ninja
00:02:24.106 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:24.106 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:34.072 The Meson build system
00:02:34.072 Version: 1.5.0
00:02:34.072 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:34.072 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:34.072 Build type: native build
00:02:34.072 Program cat found: YES (/usr/bin/cat)
00:02:34.072 Project name: DPDK
00:02:34.072 Project version: 24.03.0
00:02:34.072 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:34.072 C linker for the host machine: cc ld.bfd 2.40-14
00:02:34.072 Host machine cpu family: x86_64
00:02:34.072 Host machine cpu: x86_64
00:02:34.072 Message: ## Building in Developer Mode ##
00:02:34.072 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:34.072 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:34.072 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:34.072 Program python3 found: YES (/usr/bin/python3)
00:02:34.072 Program cat found: YES (/usr/bin/cat)
00:02:34.072 Compiler for C supports arguments -march=native: YES
00:02:34.072 Checking for size of "void *" : 8
00:02:34.072 Checking for size of "void *" : 8 (cached)
00:02:34.072 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:34.072 Library m found: YES
00:02:34.072 Library numa found: YES
00:02:34.072 Has header "numaif.h" : YES
00:02:34.072 Library fdt found: NO
00:02:34.072 Library execinfo found: NO
00:02:34.072 Has header "execinfo.h" : YES
00:02:34.072 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:34.073 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:34.073 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:34.073 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:34.073 Run-time dependency openssl found: YES 3.1.1
00:02:34.073 Run-time dependency libpcap found: YES 1.10.4
00:02:34.073 Has header "pcap.h" with dependency libpcap: YES
00:02:34.073 Compiler for C supports arguments -Wcast-qual: YES
00:02:34.073 Compiler for C supports arguments -Wdeprecated: YES
00:02:34.073 Compiler for C supports arguments -Wformat: YES
00:02:34.073 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:34.073 Compiler for C supports arguments -Wformat-security: NO
00:02:34.073 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:34.073 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:34.073 Compiler for C supports arguments -Wnested-externs: YES
00:02:34.073 Compiler for C supports arguments -Wold-style-definition: YES
00:02:34.073 Compiler for C supports arguments -Wpointer-arith: YES
00:02:34.073 Compiler for C supports arguments -Wsign-compare: YES
00:02:34.073 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:34.073 Compiler for C supports arguments -Wundef: YES
00:02:34.073 Compiler for C supports arguments -Wwrite-strings: YES
00:02:34.073 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:34.073 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:34.073 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:34.073 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:34.073 Program objdump found: YES (/usr/bin/objdump)
00:02:34.073 Compiler for C supports arguments -mavx512f: YES
00:02:34.073 Checking if "AVX512 checking" compiles: YES
00:02:34.073 Fetching value of define "__SSE4_2__" : 1
00:02:34.073 Fetching value of define "__AES__" : 1
00:02:34.073 Fetching value of define "__AVX__" : 1
00:02:34.073 Fetching value of define "__AVX2__" : 1
00:02:34.073 Fetching value of define "__AVX512BW__" : (undefined)
00:02:34.073 Fetching value of define "__AVX512CD__" : (undefined)
00:02:34.073 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:34.073 Fetching value of define "__AVX512F__" : (undefined)
00:02:34.073 Fetching value of define "__AVX512VL__" : (undefined)
00:02:34.073 Fetching value of define "__PCLMUL__" : 1
00:02:34.073 Fetching value of define "__RDRND__" : 1
00:02:34.073 Fetching value of define "__RDSEED__" : 1
00:02:34.073 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:34.073 Fetching value of define "__znver1__" : (undefined)
00:02:34.073 Fetching value of define "__znver2__" : (undefined)
00:02:34.073 Fetching value of define "__znver3__" : (undefined)
00:02:34.073 Fetching value of define "__znver4__" : (undefined)
00:02:34.073 Library asan found: YES
00:02:34.073 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:34.073 Message: lib/log: Defining dependency "log"
00:02:34.073 Message: lib/kvargs: Defining dependency "kvargs"
00:02:34.073 Message: lib/telemetry: Defining dependency "telemetry"
00:02:34.073 Library rt found: YES
00:02:34.073 Checking for function "getentropy" : NO
00:02:34.073 Message: lib/eal: Defining dependency "eal"
00:02:34.073 Message: lib/ring: Defining dependency "ring"
00:02:34.073 Message: lib/rcu: Defining dependency "rcu"
00:02:34.073 Message: lib/mempool: Defining dependency "mempool"
00:02:34.073 Message: lib/mbuf: Defining dependency "mbuf"
00:02:34.073 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:34.073 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:34.073 Compiler for C supports arguments -mpclmul: YES
00:02:34.073 Compiler for C supports arguments -maes: YES
00:02:34.073 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:34.073 Compiler for C supports arguments -mavx512bw: YES
00:02:34.073 Compiler for C supports arguments -mavx512dq: YES
00:02:34.073 Compiler for C supports arguments -mavx512vl: YES
00:02:34.073 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:34.073 Compiler for C supports arguments -mavx2: YES
00:02:34.073 Compiler for C supports arguments -mavx: YES
00:02:34.073 Message: lib/net: Defining dependency "net"
00:02:34.073 Message: lib/meter: Defining dependency "meter"
00:02:34.073 Message: lib/ethdev: Defining dependency "ethdev"
00:02:34.073 Message: lib/pci: Defining dependency "pci"
00:02:34.073 Message: lib/cmdline: Defining dependency "cmdline"
00:02:34.073 Message: lib/hash: Defining dependency "hash"
00:02:34.073 Message: lib/timer: Defining dependency "timer"
00:02:34.073 Message: lib/compressdev: Defining dependency "compressdev"
00:02:34.073 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:34.073 Message: lib/dmadev: Defining dependency "dmadev"
00:02:34.073 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:34.073 Message: lib/power: Defining dependency "power"
00:02:34.073 Message: lib/reorder: Defining dependency "reorder"
00:02:34.073 Message: lib/security: Defining dependency "security"
00:02:34.073 Has header "linux/userfaultfd.h" : YES
00:02:34.073 Has header "linux/vduse.h" : YES
00:02:34.073 Message: lib/vhost: Defining dependency "vhost"
00:02:34.073 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:34.073 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:34.073 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:34.073 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:34.073 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:34.073 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:34.073 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:34.073 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:34.073 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:34.073 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:34.073 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:34.073 Configuring doxy-api-html.conf using configuration
00:02:34.073 Configuring doxy-api-man.conf using configuration
00:02:34.073 Program mandb found: YES (/usr/bin/mandb)
00:02:34.073 Program sphinx-build found: NO
00:02:34.073 Configuring rte_build_config.h using configuration
00:02:34.073 Message:
00:02:34.073 =================
00:02:34.073 Applications Enabled
00:02:34.073 =================
00:02:34.073
00:02:34.073 apps:
00:02:34.073
00:02:34.073
00:02:34.073 Message:
00:02:34.073 =================
00:02:34.073 Libraries Enabled
=================
00:02:34.073
00:02:34.073 libs:
00:02:34.073 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:34.073 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:34.073 cryptodev, dmadev, power, reorder, security, vhost,
00:02:34.073
00:02:34.073 Message:
00:02:34.073 ===============
00:02:34.073 Drivers Enabled
00:02:34.073 ===============
00:02:34.073
00:02:34.073 common:
00:02:34.073
00:02:34.073 bus:
00:02:34.073 pci, vdev,
00:02:34.073 mempool:
00:02:34.073 ring,
00:02:34.073 dma:
00:02:34.073
00:02:34.073 net:
00:02:34.073
00:02:34.073 crypto:
00:02:34.073
00:02:34.073 compress:
00:02:34.073
00:02:34.073 vdpa:
00:02:34.073
00:02:34.073
00:02:34.073 Message:
00:02:34.073 =================
00:02:34.073 Content Skipped
00:02:34.073 =================
00:02:34.073
00:02:34.073 apps:
00:02:34.073 dumpcap: explicitly disabled via build config
00:02:34.073 graph: explicitly disabled via build config
00:02:34.073 pdump: explicitly disabled via build config
00:02:34.073 proc-info: explicitly disabled via build config
00:02:34.073 test-acl: explicitly disabled via build config
00:02:34.073 test-bbdev: explicitly disabled via build config
00:02:34.073 test-cmdline: explicitly disabled via build config
00:02:34.073 test-compress-perf: explicitly disabled via build config
00:02:34.073 test-crypto-perf: explicitly disabled via build config
00:02:34.073 test-dma-perf: explicitly disabled via build config
00:02:34.073 test-eventdev: explicitly disabled via build config
00:02:34.073 test-fib: explicitly disabled via build config
00:02:34.073 test-flow-perf: explicitly disabled via build config
00:02:34.073 test-gpudev: explicitly disabled via build config
00:02:34.073 test-mldev: explicitly disabled via build config
00:02:34.073 test-pipeline: explicitly disabled via build config
00:02:34.073 test-pmd: explicitly disabled via build config
00:02:34.073 test-regex: explicitly disabled via build config
00:02:34.074 test-sad: explicitly disabled via build config
00:02:34.074 test-security-perf: explicitly disabled via build config
00:02:34.074
00:02:34.074 libs:
00:02:34.074 argparse: explicitly disabled via build config
00:02:34.074 metrics: explicitly disabled via build config
00:02:34.074 acl: explicitly disabled via build config
00:02:34.074 bbdev: explicitly disabled via build config
00:02:34.074 bitratestats: explicitly disabled via build config
00:02:34.074 bpf: explicitly disabled via build config
00:02:34.074 cfgfile: explicitly disabled via build config
00:02:34.074 distributor: explicitly disabled via build config
00:02:34.074 efd: explicitly disabled via build config
00:02:34.074 eventdev: explicitly disabled via build config
00:02:34.074 dispatcher: explicitly disabled via build config
00:02:34.074 gpudev: explicitly disabled via build config
00:02:34.074 gro: explicitly disabled via build config
00:02:34.074 gso: explicitly disabled via build config
00:02:34.074 ip_frag: explicitly disabled via build config
00:02:34.074 jobstats: explicitly disabled via build config
00:02:34.074 latencystats: explicitly disabled via build config
00:02:34.074 lpm: explicitly disabled via build config
00:02:34.074 member: explicitly disabled via build config
00:02:34.074 pcapng: explicitly disabled via build config
00:02:34.074 rawdev: explicitly disabled via build config
00:02:34.074 regexdev: explicitly disabled via build config
00:02:34.074 mldev: explicitly disabled via build config
00:02:34.074 rib: explicitly disabled via build config
00:02:34.074 sched: explicitly disabled via build config
disabled via build config 00:02:34.074 stack: explicitly disabled via build config 00:02:34.074 ipsec: explicitly disabled via build config 00:02:34.074 pdcp: explicitly disabled via build config 00:02:34.074 fib: explicitly disabled via build config 00:02:34.074 port: explicitly disabled via build config 00:02:34.074 pdump: explicitly disabled via build config 00:02:34.074 table: explicitly disabled via build config 00:02:34.074 pipeline: explicitly disabled via build config 00:02:34.074 graph: explicitly disabled via build config 00:02:34.074 node: explicitly disabled via build config 00:02:34.074 00:02:34.074 drivers: 00:02:34.074 common/cpt: not in enabled drivers build config 00:02:34.074 common/dpaax: not in enabled drivers build config 00:02:34.074 common/iavf: not in enabled drivers build config 00:02:34.074 common/idpf: not in enabled drivers build config 00:02:34.074 common/ionic: not in enabled drivers build config 00:02:34.074 common/mvep: not in enabled drivers build config 00:02:34.074 common/octeontx: not in enabled drivers build config 00:02:34.074 bus/auxiliary: not in enabled drivers build config 00:02:34.074 bus/cdx: not in enabled drivers build config 00:02:34.074 bus/dpaa: not in enabled drivers build config 00:02:34.074 bus/fslmc: not in enabled drivers build config 00:02:34.074 bus/ifpga: not in enabled drivers build config 00:02:34.074 bus/platform: not in enabled drivers build config 00:02:34.074 bus/uacce: not in enabled drivers build config 00:02:34.074 bus/vmbus: not in enabled drivers build config 00:02:34.074 common/cnxk: not in enabled drivers build config 00:02:34.074 common/mlx5: not in enabled drivers build config 00:02:34.074 common/nfp: not in enabled drivers build config 00:02:34.074 common/nitrox: not in enabled drivers build config 00:02:34.074 common/qat: not in enabled drivers build config 00:02:34.074 common/sfc_efx: not in enabled drivers build config 00:02:34.074 mempool/bucket: not in enabled drivers build config 00:02:34.074 mempool/cnxk: not in enabled drivers build config 00:02:34.074 mempool/dpaa: not in enabled drivers build config 00:02:34.074 mempool/dpaa2: not in enabled drivers build config 00:02:34.074 mempool/octeontx: not in enabled drivers build config 00:02:34.074 mempool/stack: not in enabled drivers build config 00:02:34.074 dma/cnxk: not in enabled drivers build config 00:02:34.074 dma/dpaa: not in enabled drivers build config 00:02:34.074 dma/dpaa2: not in enabled drivers build config 00:02:34.074 dma/hisilicon: not in enabled drivers build config 00:02:34.074 dma/idxd: not in enabled drivers build config 00:02:34.074 dma/ioat: not in enabled drivers build config 00:02:34.074 dma/skeleton: not in enabled drivers build config 00:02:34.074 net/af_packet: not in enabled drivers build config 00:02:34.074 net/af_xdp: not in enabled drivers build config 00:02:34.074 net/ark: not in enabled drivers build config 00:02:34.074 net/atlantic: not in enabled drivers build config 00:02:34.074 net/avp: not in enabled drivers build config 00:02:34.074 net/axgbe: not in enabled drivers build config 00:02:34.074 net/bnx2x: not in enabled drivers build config 00:02:34.074 net/bnxt: not in enabled drivers build config 00:02:34.074 net/bonding: not in enabled drivers build config 00:02:34.074 net/cnxk: not in enabled drivers build config 00:02:34.074 net/cpfl: not in enabled drivers build config 00:02:34.074 net/cxgbe: not in enabled drivers build config 00:02:34.074 net/dpaa: not in enabled drivers build config 00:02:34.074 net/dpaa2: not in 
enabled drivers build config 00:02:34.074 net/e1000: not in enabled drivers build config 00:02:34.074 net/ena: not in enabled drivers build config 00:02:34.074 net/enetc: not in enabled drivers build config 00:02:34.074 net/enetfec: not in enabled drivers build config 00:02:34.074 net/enic: not in enabled drivers build config 00:02:34.074 net/failsafe: not in enabled drivers build config 00:02:34.074 net/fm10k: not in enabled drivers build config 00:02:34.074 net/gve: not in enabled drivers build config 00:02:34.074 net/hinic: not in enabled drivers build config 00:02:34.074 net/hns3: not in enabled drivers build config 00:02:34.074 net/i40e: not in enabled drivers build config 00:02:34.074 net/iavf: not in enabled drivers build config 00:02:34.074 net/ice: not in enabled drivers build config 00:02:34.074 net/idpf: not in enabled drivers build config 00:02:34.074 net/igc: not in enabled drivers build config 00:02:34.074 net/ionic: not in enabled drivers build config 00:02:34.074 net/ipn3ke: not in enabled drivers build config 00:02:34.074 net/ixgbe: not in enabled drivers build config 00:02:34.074 net/mana: not in enabled drivers build config 00:02:34.074 net/memif: not in enabled drivers build config 00:02:34.074 net/mlx4: not in enabled drivers build config 00:02:34.074 net/mlx5: not in enabled drivers build config 00:02:34.074 net/mvneta: not in enabled drivers build config 00:02:34.074 net/mvpp2: not in enabled drivers build config 00:02:34.074 net/netvsc: not in enabled drivers build config 00:02:34.074 net/nfb: not in enabled drivers build config 00:02:34.074 net/nfp: not in enabled drivers build config 00:02:34.074 net/ngbe: not in enabled drivers build config 00:02:34.074 net/null: not in enabled drivers build config 00:02:34.074 net/octeontx: not in enabled drivers build config 00:02:34.074 net/octeon_ep: not in enabled drivers build config 00:02:34.074 net/pcap: not in enabled drivers build config 00:02:34.074 net/pfe: not in enabled drivers build config 00:02:34.074 net/qede: not in enabled drivers build config 00:02:34.074 net/ring: not in enabled drivers build config 00:02:34.074 net/sfc: not in enabled drivers build config 00:02:34.074 net/softnic: not in enabled drivers build config 00:02:34.074 net/tap: not in enabled drivers build config 00:02:34.074 net/thunderx: not in enabled drivers build config 00:02:34.074 net/txgbe: not in enabled drivers build config 00:02:34.074 net/vdev_netvsc: not in enabled drivers build config 00:02:34.074 net/vhost: not in enabled drivers build config 00:02:34.074 net/virtio: not in enabled drivers build config 00:02:34.074 net/vmxnet3: not in enabled drivers build config 00:02:34.074 raw/*: missing internal dependency, "rawdev" 00:02:34.074 crypto/armv8: not in enabled drivers build config 00:02:34.074 crypto/bcmfs: not in enabled drivers build config 00:02:34.074 crypto/caam_jr: not in enabled drivers build config 00:02:34.074 crypto/ccp: not in enabled drivers build config 00:02:34.074 crypto/cnxk: not in enabled drivers build config 00:02:34.074 crypto/dpaa_sec: not in enabled drivers build config 00:02:34.074 crypto/dpaa2_sec: not in enabled drivers build config 00:02:34.074 crypto/ipsec_mb: not in enabled drivers build config 00:02:34.074 crypto/mlx5: not in enabled drivers build config 00:02:34.074 crypto/mvsam: not in enabled drivers build config 00:02:34.074 crypto/nitrox: not in enabled drivers build config 00:02:34.074 crypto/null: not in enabled drivers build config 00:02:34.074 crypto/octeontx: not in enabled drivers build config 
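Annotation: the long "not in enabled drivers build config" listing above means those PMD sources are simply never compiled; what did make it into the build is recorded in the generated rte_build_config.h that an earlier line of this log shows meson configuring. A minimal sketch of probing that header at compile time, assuming the usual RTE_<CLASS>_<NAME> macro naming (the RTE_NET_RING name is an assumption here; verify it against the rte_build_config.h your own build produced):

    /* build_probe.c - sketch: probe the generated DPDK build config.
     * RTE_NET_RING is assumed, not guaranteed; check rte_build_config.h
     * from your build for the exact macro names. */
    #include <rte_build_config.h>
    #include <stdio.h>

    int main(void)
    {
    #ifdef RTE_NET_RING
        puts("net/ring PMD was compiled into this DPDK build");
    #else
        puts("net/ring PMD disabled via build config");
    #endif
        return 0;
    }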
00:02:34.074 crypto/openssl: not in enabled drivers build config 00:02:34.074 crypto/scheduler: not in enabled drivers build config 00:02:34.074 crypto/uadk: not in enabled drivers build config 00:02:34.074 crypto/virtio: not in enabled drivers build config 00:02:34.074 compress/isal: not in enabled drivers build config 00:02:34.074 compress/mlx5: not in enabled drivers build config 00:02:34.074 compress/nitrox: not in enabled drivers build config 00:02:34.074 compress/octeontx: not in enabled drivers build config 00:02:34.074 compress/zlib: not in enabled drivers build config 00:02:34.074 regex/*: missing internal dependency, "regexdev" 00:02:34.074 ml/*: missing internal dependency, "mldev" 00:02:34.075 vdpa/ifc: not in enabled drivers build config 00:02:34.075 vdpa/mlx5: not in enabled drivers build config 00:02:34.075 vdpa/nfp: not in enabled drivers build config 00:02:34.075 vdpa/sfc: not in enabled drivers build config 00:02:34.075 event/*: missing internal dependency, "eventdev" 00:02:34.075 baseband/*: missing internal dependency, "bbdev" 00:02:34.075 gpu/*: missing internal dependency, "gpudev" 00:02:34.075 00:02:34.075 00:02:34.640 Build targets in project: 85 00:02:34.640 00:02:34.640 DPDK 24.03.0 00:02:34.640 00:02:34.640 User defined options 00:02:34.640 buildtype : debug 00:02:34.640 default_library : shared 00:02:34.641 libdir : lib 00:02:34.641 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:34.641 b_sanitize : address 00:02:34.641 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:34.641 c_link_args : 00:02:34.641 cpu_instruction_set: native 00:02:34.641 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:34.641 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:34.641 enable_docs : false 00:02:34.641 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:34.641 enable_kmods : false 00:02:34.641 max_lcores : 128 00:02:34.641 tests : false 00:02:34.641 00:02:34.641 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:35.573 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:02:35.573 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:35.573 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:35.573 [3/268] Linking static target lib/librte_kvargs.a 00:02:35.846 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:35.846 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:35.846 [6/268] Linking static target lib/librte_log.a 00:02:36.124 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.382 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:36.382 [9/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:36.639 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:36.639 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:36.639 [12/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:36.639 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:36.639 [14/268] Linking static target lib/librte_telemetry.a 00:02:36.897 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:36.897 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.155 [17/268] Linking target lib/librte_log.so.24.1 00:02:37.155 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:37.155 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:37.155 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:37.412 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:37.412 [22/268] Linking target lib/librte_kvargs.so.24.1 00:02:37.412 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:37.670 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:37.670 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:37.670 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:37.670 [27/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:37.928 [28/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.928 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:37.928 [30/268] Linking target lib/librte_telemetry.so.24.1 00:02:38.186 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:38.186 [32/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:38.443 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:38.443 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:38.443 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:38.701 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:38.959 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:38.959 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:38.959 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:38.959 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:38.959 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:39.216 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:39.216 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.475 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:39.475 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.733 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:39.991 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:39.991 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.991 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.248 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.249 [51/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.249 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:40.815 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:40.815 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.815 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.815 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.815 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:41.073 [58/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:41.331 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:41.331 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:41.331 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:41.589 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:41.589 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:41.846 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:41.846 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:41.847 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:42.104 [67/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:42.104 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:42.363 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:42.363 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:42.363 [71/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:42.363 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:42.621 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:42.621 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:42.621 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:42.879 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:42.879 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:42.879 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:43.137 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:43.137 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:43.137 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:43.137 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:43.394 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:43.394 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:43.394 [85/268] Linking static target lib/librte_ring.a 00:02:43.652 [86/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:43.909 [87/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:43.909 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:43.909 [89/268] Linking static target lib/librte_eal.a 00:02:43.909 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:43.909 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:44.167 [92/268] Generating lib/ring.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:44.167 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:44.167 [94/268] Linking static target lib/librte_mempool.a 00:02:44.167 [95/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:44.424 [96/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:44.424 [97/268] Linking static target lib/librte_rcu.a 00:02:44.682 [98/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:44.682 [99/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:44.939 [100/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:44.939 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:44.939 [102/268] Linking static target lib/librte_mbuf.a 00:02:44.939 [103/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:44.939 [104/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:44.939 [105/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.196 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:45.196 [107/268] Linking static target lib/librte_meter.a 00:02:45.464 [108/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:45.464 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:45.721 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:45.721 [111/268] Linking static target lib/librte_net.a 00:02:45.721 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.721 [113/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.980 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:46.237 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:46.237 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:46.237 [117/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.237 [118/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.802 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:47.060 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:47.317 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:47.575 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:47.575 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:47.833 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:47.833 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:48.091 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:48.091 [127/268] Linking static target lib/librte_pci.a 00:02:48.091 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:48.091 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:48.350 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:48.350 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:48.350 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:48.350 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 
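Annotation: the compile order above mirrors DPDK's layering, with ring linked first, then rcu, mempool, and mbuf on top of it, and the ring-based mempool driver is the only mempool handler enabled in this build. A minimal sketch of exercising those just-linked libraries through the long-stable public mempool API (error handling trimmed; the pool name and sizes are arbitrary for the example):

    /* mempool_demo.c - sketch of the ring/mempool layering being linked
     * above; uses stable rte_eal / rte_mempool calls. */
    #include <rte_eal.h>
    #include <rte_mempool.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (rte_eal_init(argc, argv) < 0) {
            fprintf(stderr, "EAL init failed\n");
            return 1;
        }

        /* 1024 fixed-size objects; with this build config the default
         * handler is the ring-based mempool driver ("mempool_ring"). */
        struct rte_mempool *mp = rte_mempool_create("demo_pool", 1024, 256,
                                                    32, 0, NULL, NULL, NULL,
                                                    NULL, SOCKET_ID_ANY, 0);
        if (mp == NULL)
            return 1;

        void *obj;
        if (rte_mempool_get(mp, &obj) == 0) {
            /* ... use the buffer ... */
            rte_mempool_put(mp, obj);
        }

        rte_mempool_free(mp);
        rte_eal_cleanup();
        return 0;
    }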
00:02:48.350 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:48.607 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:48.607 [136/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:48.607 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:48.607 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:48.607 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:48.607 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:48.607 [141/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.865 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:48.865 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:48.865 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:48.865 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:48.865 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:49.122 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:49.122 [148/268] Linking static target lib/librte_cmdline.a 00:02:49.690 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:49.690 [150/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:49.690 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:49.690 [152/268] Linking static target lib/librte_timer.a 00:02:49.950 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:49.950 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:50.211 [155/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:50.472 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:50.472 [157/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.472 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:50.472 [159/268] Linking static target lib/librte_hash.a 00:02:50.733 [160/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:50.733 [161/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:50.733 [162/268] Linking static target lib/librte_compressdev.a 00:02:50.733 [163/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:50.733 [164/268] Linking static target lib/librte_ethdev.a 00:02:50.995 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:50.995 [166/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.257 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:51.257 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:51.257 [169/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:51.257 [170/268] Linking static target lib/librte_dmadev.a 00:02:51.257 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:51.518 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:51.776 [173/268] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:51.776 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.034 [175/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:52.034 [176/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.034 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:52.292 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:52.292 [179/268] Linking static target lib/librte_cryptodev.a 00:02:52.292 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.292 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:52.292 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:52.550 [183/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:52.550 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:53.116 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:53.116 [186/268] Linking static target lib/librte_power.a 00:02:53.116 [187/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:53.116 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:53.116 [189/268] Linking static target lib/librte_security.a 00:02:53.116 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:53.116 [191/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:53.116 [192/268] Linking static target lib/librte_reorder.a 00:02:53.679 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:53.679 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:53.946 [195/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.946 [196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.203 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.461 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:54.461 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:54.461 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:54.718 [201/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.719 [202/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:54.719 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:54.975 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:54.975 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:55.232 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:55.232 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:55.490 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:55.490 [209/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:55.490 [210/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:55.490 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:55.748 [212/268] Generating 
drivers/rte_bus_vdev.pmd.c with a custom command 00:02:55.748 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:55.748 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.006 [215/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:56.006 [216/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.006 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:56.006 [218/268] Linking static target drivers/librte_bus_vdev.a 00:02:56.006 [219/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.006 [220/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.006 [221/268] Linking static target drivers/librte_bus_pci.a 00:02:56.006 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:56.006 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.006 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:56.263 [225/268] Linking static target drivers/librte_mempool_ring.a 00:02:56.263 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.521 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.087 [228/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.087 [229/268] Linking target lib/librte_eal.so.24.1 00:02:57.345 [230/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:57.345 [231/268] Linking target lib/librte_timer.so.24.1 00:02:57.345 [232/268] Linking target lib/librte_pci.so.24.1 00:02:57.345 [233/268] Linking target lib/librte_ring.so.24.1 00:02:57.345 [234/268] Linking target lib/librte_dmadev.so.24.1 00:02:57.345 [235/268] Linking target lib/librte_meter.so.24.1 00:02:57.345 [236/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:57.345 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:57.602 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:57.602 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:57.602 [240/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:57.602 [241/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:57.602 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:57.602 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:57.602 [244/268] Linking target lib/librte_mempool.so.24.1 00:02:57.602 [245/268] Linking target lib/librte_rcu.so.24.1 00:02:57.859 [246/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:57.859 [247/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:57.859 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:57.859 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:57.859 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:57.859 [251/268] Linking target lib/librte_reorder.so.24.1 00:02:57.859 [252/268] Linking target 
lib/librte_net.so.24.1 00:02:58.117 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:02:58.117 [254/268] Linking target lib/librte_compressdev.so.24.1 00:02:58.117 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:58.117 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:58.117 [257/268] Linking target lib/librte_hash.so.24.1 00:02:58.117 [258/268] Linking target lib/librte_cmdline.so.24.1 00:02:58.117 [259/268] Linking target lib/librte_security.so.24.1 00:02:58.374 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:58.941 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.941 [262/268] Linking target lib/librte_ethdev.so.24.1 00:02:59.199 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:59.199 [264/268] Linking target lib/librte_power.so.24.1 00:03:02.478 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.478 [266/268] Linking static target lib/librte_vhost.a 00:03:03.411 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.668 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:03.668 INFO: autodetecting backend as ninja 00:03:03.668 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:25.588 CC lib/ut_mock/mock.o 00:03:25.588 CC lib/log/log_flags.o 00:03:25.588 CC lib/log/log.o 00:03:25.588 CC lib/log/log_deprecated.o 00:03:25.588 CC lib/ut/ut.o 00:03:25.588 LIB libspdk_ut.a 00:03:25.588 LIB libspdk_ut_mock.a 00:03:25.588 LIB libspdk_log.a 00:03:25.588 SO libspdk_ut.so.2.0 00:03:25.588 SO libspdk_ut_mock.so.6.0 00:03:25.588 SO libspdk_log.so.7.1 00:03:25.588 SYMLINK libspdk_ut_mock.so 00:03:25.588 SYMLINK libspdk_ut.so 00:03:25.588 SYMLINK libspdk_log.so 00:03:25.588 CC lib/ioat/ioat.o 00:03:25.588 CXX lib/trace_parser/trace.o 00:03:25.588 CC lib/dma/dma.o 00:03:25.588 CC lib/util/base64.o 00:03:25.588 CC lib/util/bit_array.o 00:03:25.588 CC lib/util/cpuset.o 00:03:25.588 CC lib/util/crc16.o 00:03:25.588 CC lib/util/crc32.o 00:03:25.588 CC lib/util/crc32c.o 00:03:25.588 CC lib/vfio_user/host/vfio_user_pci.o 00:03:25.588 CC lib/util/crc32_ieee.o 00:03:25.588 CC lib/util/crc64.o 00:03:25.588 CC lib/vfio_user/host/vfio_user.o 00:03:25.588 CC lib/util/dif.o 00:03:25.588 CC lib/util/fd.o 00:03:25.588 LIB libspdk_ioat.a 00:03:25.588 LIB libspdk_dma.a 00:03:25.588 SO libspdk_ioat.so.7.0 00:03:25.588 CC lib/util/fd_group.o 00:03:25.588 CC lib/util/file.o 00:03:25.588 SO libspdk_dma.so.5.0 00:03:25.588 SYMLINK libspdk_ioat.so 00:03:25.588 CC lib/util/hexlify.o 00:03:25.588 CC lib/util/iov.o 00:03:25.588 SYMLINK libspdk_dma.so 00:03:25.588 CC lib/util/math.o 00:03:25.588 CC lib/util/net.o 00:03:25.588 CC lib/util/pipe.o 00:03:25.588 LIB libspdk_vfio_user.a 00:03:25.588 SO libspdk_vfio_user.so.5.0 00:03:25.588 CC lib/util/strerror_tls.o 00:03:25.588 CC lib/util/string.o 00:03:25.588 CC lib/util/uuid.o 00:03:25.588 SYMLINK libspdk_vfio_user.so 00:03:25.588 CC lib/util/xor.o 00:03:25.588 CC lib/util/zipf.o 00:03:25.588 CC lib/util/md5.o 00:03:25.588 LIB libspdk_util.a 00:03:25.588 SO libspdk_util.so.10.1 00:03:25.588 SYMLINK libspdk_util.so 00:03:25.588 LIB libspdk_trace_parser.a 00:03:25.588 SO libspdk_trace_parser.so.6.0 00:03:25.588 CC lib/idxd/idxd_kernel.o 00:03:25.588 CC lib/idxd/idxd.o 
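Annotation: the CC lib/util/*.o lines above are SPDK's helper library, including the crc16/crc32/crc32c objects. A minimal sketch of the CRC-32C helper from include/spdk/crc32.h; note the ~0 seed and final inversion are this example's convention, not something the log asserts:

    /* crc_demo.c - sketch using the lib/util CRC helper compiled above. */
    #include "spdk/crc32.h"
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char buf[] = "spdk build log";
        /* Running update over one buffer; seed/finalization choice is
         * the caller's (an assumption in this example). */
        uint32_t crc = spdk_crc32c_update(buf, strlen(buf), ~0U);
        printf("crc32c = 0x%08x\n", crc ^ ~0U);
        return 0;
    }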
00:03:25.588 CC lib/idxd/idxd_user.o 00:03:25.588 CC lib/conf/conf.o 00:03:25.588 CC lib/env_dpdk/env.o 00:03:25.588 CC lib/json/json_parse.o 00:03:25.588 CC lib/vmd/vmd.o 00:03:25.588 CC lib/rdma_provider/common.o 00:03:25.588 CC lib/rdma_utils/rdma_utils.o 00:03:25.588 SYMLINK libspdk_trace_parser.so 00:03:25.588 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:25.588 CC lib/json/json_util.o 00:03:25.588 CC lib/json/json_write.o 00:03:25.588 LIB libspdk_rdma_provider.a 00:03:25.588 SO libspdk_rdma_provider.so.6.0 00:03:25.588 LIB libspdk_conf.a 00:03:25.588 SYMLINK libspdk_rdma_provider.so 00:03:25.588 CC lib/vmd/led.o 00:03:25.588 SO libspdk_conf.so.6.0 00:03:25.588 CC lib/env_dpdk/memory.o 00:03:25.588 CC lib/env_dpdk/pci.o 00:03:25.588 LIB libspdk_rdma_utils.a 00:03:25.588 SYMLINK libspdk_conf.so 00:03:25.588 SO libspdk_rdma_utils.so.1.0 00:03:25.589 CC lib/env_dpdk/init.o 00:03:25.589 CC lib/env_dpdk/threads.o 00:03:25.589 CC lib/env_dpdk/pci_ioat.o 00:03:25.589 SYMLINK libspdk_rdma_utils.so 00:03:25.589 CC lib/env_dpdk/pci_virtio.o 00:03:25.589 LIB libspdk_json.a 00:03:25.589 SO libspdk_json.so.6.0 00:03:25.589 CC lib/env_dpdk/pci_vmd.o 00:03:25.589 CC lib/env_dpdk/pci_idxd.o 00:03:25.589 SYMLINK libspdk_json.so 00:03:25.589 CC lib/env_dpdk/pci_event.o 00:03:25.589 CC lib/env_dpdk/sigbus_handler.o 00:03:25.589 CC lib/env_dpdk/pci_dpdk.o 00:03:25.589 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:25.589 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:25.589 LIB libspdk_idxd.a 00:03:25.589 SO libspdk_idxd.so.12.1 00:03:25.847 SYMLINK libspdk_idxd.so 00:03:25.847 LIB libspdk_vmd.a 00:03:25.847 CC lib/jsonrpc/jsonrpc_server.o 00:03:25.847 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:25.847 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:25.847 CC lib/jsonrpc/jsonrpc_client.o 00:03:25.847 SO libspdk_vmd.so.6.0 00:03:25.847 SYMLINK libspdk_vmd.so 00:03:26.106 LIB libspdk_jsonrpc.a 00:03:26.106 SO libspdk_jsonrpc.so.6.0 00:03:26.106 SYMLINK libspdk_jsonrpc.so 00:03:26.364 CC lib/rpc/rpc.o 00:03:26.622 LIB libspdk_rpc.a 00:03:26.879 SO libspdk_rpc.so.6.0 00:03:26.879 LIB libspdk_env_dpdk.a 00:03:26.879 SYMLINK libspdk_rpc.so 00:03:26.879 SO libspdk_env_dpdk.so.15.1 00:03:27.136 CC lib/keyring/keyring.o 00:03:27.136 CC lib/trace/trace.o 00:03:27.136 CC lib/keyring/keyring_rpc.o 00:03:27.136 CC lib/trace/trace_flags.o 00:03:27.136 CC lib/trace/trace_rpc.o 00:03:27.136 CC lib/notify/notify.o 00:03:27.136 CC lib/notify/notify_rpc.o 00:03:27.136 SYMLINK libspdk_env_dpdk.so 00:03:27.393 LIB libspdk_notify.a 00:03:27.393 SO libspdk_notify.so.6.0 00:03:27.393 LIB libspdk_keyring.a 00:03:27.393 SO libspdk_keyring.so.2.0 00:03:27.393 SYMLINK libspdk_notify.so 00:03:27.393 SYMLINK libspdk_keyring.so 00:03:27.393 LIB libspdk_trace.a 00:03:27.649 SO libspdk_trace.so.11.0 00:03:27.649 SYMLINK libspdk_trace.so 00:03:27.906 CC lib/thread/iobuf.o 00:03:27.906 CC lib/thread/thread.o 00:03:27.906 CC lib/sock/sock.o 00:03:27.906 CC lib/sock/sock_rpc.o 00:03:28.470 LIB libspdk_sock.a 00:03:28.470 SO libspdk_sock.so.10.0 00:03:28.470 SYMLINK libspdk_sock.so 00:03:28.728 CC lib/nvme/nvme_ctrlr.o 00:03:28.728 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:28.728 CC lib/nvme/nvme_fabric.o 00:03:28.728 CC lib/nvme/nvme_ns_cmd.o 00:03:28.728 CC lib/nvme/nvme_ns.o 00:03:28.728 CC lib/nvme/nvme_pcie_common.o 00:03:28.728 CC lib/nvme/nvme_qpair.o 00:03:28.728 CC lib/nvme/nvme_pcie.o 00:03:28.728 CC lib/nvme/nvme.o 00:03:29.661 CC lib/nvme/nvme_quirks.o 00:03:29.918 CC lib/nvme/nvme_transport.o 00:03:29.918 LIB libspdk_thread.a 00:03:30.175 SO 
libspdk_thread.so.11.0 00:03:30.175 CC lib/nvme/nvme_discovery.o 00:03:30.175 SYMLINK libspdk_thread.so 00:03:30.175 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.175 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.175 CC lib/nvme/nvme_tcp.o 00:03:30.175 CC lib/nvme/nvme_opal.o 00:03:30.432 CC lib/nvme/nvme_io_msg.o 00:03:30.432 CC lib/nvme/nvme_poll_group.o 00:03:30.432 CC lib/nvme/nvme_zns.o 00:03:30.690 CC lib/nvme/nvme_stubs.o 00:03:30.947 CC lib/nvme/nvme_auth.o 00:03:30.947 CC lib/nvme/nvme_cuse.o 00:03:30.947 CC lib/nvme/nvme_rdma.o 00:03:31.205 CC lib/accel/accel.o 00:03:31.205 CC lib/blob/blobstore.o 00:03:31.205 CC lib/init/json_config.o 00:03:31.463 CC lib/virtio/virtio.o 00:03:31.463 CC lib/init/subsystem.o 00:03:31.463 CC lib/accel/accel_rpc.o 00:03:31.721 CC lib/init/subsystem_rpc.o 00:03:31.721 CC lib/virtio/virtio_vhost_user.o 00:03:31.721 CC lib/virtio/virtio_vfio_user.o 00:03:31.721 CC lib/init/rpc.o 00:03:31.721 CC lib/virtio/virtio_pci.o 00:03:31.987 LIB libspdk_init.a 00:03:31.987 CC lib/accel/accel_sw.o 00:03:31.987 SO libspdk_init.so.6.0 00:03:32.315 SYMLINK libspdk_init.so 00:03:32.315 CC lib/blob/zeroes.o 00:03:32.315 CC lib/blob/request.o 00:03:32.315 CC lib/blob/blob_bs_dev.o 00:03:32.315 LIB libspdk_virtio.a 00:03:32.315 SO libspdk_virtio.so.7.0 00:03:32.315 CC lib/fsdev/fsdev.o 00:03:32.315 SYMLINK libspdk_virtio.so 00:03:32.315 CC lib/fsdev/fsdev_io.o 00:03:32.573 CC lib/fsdev/fsdev_rpc.o 00:03:32.573 CC lib/event/app.o 00:03:32.573 CC lib/event/reactor.o 00:03:32.573 CC lib/event/log_rpc.o 00:03:32.573 CC lib/event/app_rpc.o 00:03:32.573 LIB libspdk_accel.a 00:03:32.573 CC lib/event/scheduler_static.o 00:03:32.831 SO libspdk_accel.so.16.0 00:03:32.831 SYMLINK libspdk_accel.so 00:03:32.831 LIB libspdk_nvme.a 00:03:33.087 CC lib/bdev/bdev.o 00:03:33.087 CC lib/bdev/bdev_rpc.o 00:03:33.087 CC lib/bdev/bdev_zone.o 00:03:33.087 CC lib/bdev/part.o 00:03:33.087 CC lib/bdev/scsi_nvme.o 00:03:33.087 LIB libspdk_fsdev.a 00:03:33.087 SO libspdk_nvme.so.15.0 00:03:33.345 SO libspdk_fsdev.so.2.0 00:03:33.345 LIB libspdk_event.a 00:03:33.345 SO libspdk_event.so.14.0 00:03:33.345 SYMLINK libspdk_fsdev.so 00:03:33.345 SYMLINK libspdk_event.so 00:03:33.602 SYMLINK libspdk_nvme.so 00:03:33.602 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:34.536 LIB libspdk_fuse_dispatcher.a 00:03:34.536 SO libspdk_fuse_dispatcher.so.1.0 00:03:34.536 SYMLINK libspdk_fuse_dispatcher.so 00:03:36.433 LIB libspdk_blob.a 00:03:36.433 SO libspdk_blob.so.11.0 00:03:36.433 SYMLINK libspdk_blob.so 00:03:36.691 CC lib/blobfs/blobfs.o 00:03:36.691 CC lib/blobfs/tree.o 00:03:36.691 CC lib/lvol/lvol.o 00:03:36.948 LIB libspdk_bdev.a 00:03:36.948 SO libspdk_bdev.so.17.0 00:03:37.206 SYMLINK libspdk_bdev.so 00:03:37.463 CC lib/scsi/lun.o 00:03:37.463 CC lib/scsi/port.o 00:03:37.464 CC lib/scsi/scsi.o 00:03:37.464 CC lib/nvmf/ctrlr.o 00:03:37.464 CC lib/scsi/dev.o 00:03:37.464 CC lib/nbd/nbd.o 00:03:37.464 CC lib/ublk/ublk.o 00:03:37.464 CC lib/ftl/ftl_core.o 00:03:37.464 CC lib/scsi/scsi_bdev.o 00:03:37.464 CC lib/scsi/scsi_pr.o 00:03:37.722 CC lib/nbd/nbd_rpc.o 00:03:37.722 CC lib/ublk/ublk_rpc.o 00:03:37.722 LIB libspdk_blobfs.a 00:03:37.722 LIB libspdk_lvol.a 00:03:37.722 SO libspdk_lvol.so.10.0 00:03:37.722 SO libspdk_blobfs.so.10.0 00:03:37.980 CC lib/scsi/scsi_rpc.o 00:03:37.980 LIB libspdk_nbd.a 00:03:37.980 SYMLINK libspdk_blobfs.so 00:03:37.980 CC lib/nvmf/ctrlr_discovery.o 00:03:37.980 CC lib/ftl/ftl_init.o 00:03:37.980 SYMLINK libspdk_lvol.so 00:03:37.980 CC lib/nvmf/ctrlr_bdev.o 00:03:37.980 SO 
libspdk_nbd.so.7.0 00:03:37.980 CC lib/nvmf/subsystem.o 00:03:37.980 CC lib/scsi/task.o 00:03:37.980 SYMLINK libspdk_nbd.so 00:03:37.980 CC lib/nvmf/nvmf.o 00:03:37.980 CC lib/nvmf/nvmf_rpc.o 00:03:38.238 CC lib/ftl/ftl_layout.o 00:03:38.238 CC lib/nvmf/transport.o 00:03:38.239 LIB libspdk_scsi.a 00:03:38.239 SO libspdk_scsi.so.9.0 00:03:38.497 SYMLINK libspdk_scsi.so 00:03:38.497 CC lib/ftl/ftl_debug.o 00:03:38.497 LIB libspdk_ublk.a 00:03:38.497 SO libspdk_ublk.so.3.0 00:03:38.497 CC lib/ftl/ftl_io.o 00:03:38.754 SYMLINK libspdk_ublk.so 00:03:38.754 CC lib/nvmf/tcp.o 00:03:38.754 CC lib/ftl/ftl_sb.o 00:03:39.012 CC lib/nvmf/stubs.o 00:03:39.012 CC lib/nvmf/mdns_server.o 00:03:39.012 CC lib/nvmf/rdma.o 00:03:39.012 CC lib/nvmf/auth.o 00:03:39.270 CC lib/ftl/ftl_l2p.o 00:03:39.270 CC lib/iscsi/conn.o 00:03:39.529 CC lib/iscsi/init_grp.o 00:03:39.529 CC lib/ftl/ftl_l2p_flat.o 00:03:39.788 CC lib/iscsi/iscsi.o 00:03:40.053 CC lib/iscsi/param.o 00:03:40.053 CC lib/ftl/ftl_nv_cache.o 00:03:40.053 CC lib/iscsi/portal_grp.o 00:03:40.053 CC lib/iscsi/tgt_node.o 00:03:40.053 CC lib/vhost/vhost.o 00:03:40.309 CC lib/iscsi/iscsi_subsystem.o 00:03:40.309 CC lib/iscsi/iscsi_rpc.o 00:03:40.567 CC lib/iscsi/task.o 00:03:40.825 CC lib/ftl/ftl_band.o 00:03:40.825 CC lib/ftl/ftl_band_ops.o 00:03:40.825 CC lib/ftl/ftl_writer.o 00:03:40.825 CC lib/vhost/vhost_rpc.o 00:03:40.825 CC lib/vhost/vhost_scsi.o 00:03:41.083 CC lib/vhost/vhost_blk.o 00:03:41.083 CC lib/vhost/rte_vhost_user.o 00:03:41.083 CC lib/ftl/ftl_rq.o 00:03:41.340 CC lib/ftl/ftl_reloc.o 00:03:41.598 CC lib/ftl/ftl_l2p_cache.o 00:03:41.598 CC lib/ftl/ftl_p2l.o 00:03:41.598 LIB libspdk_iscsi.a 00:03:41.598 CC lib/ftl/ftl_p2l_log.o 00:03:41.855 SO libspdk_iscsi.so.8.0 00:03:41.855 CC lib/ftl/mngt/ftl_mngt.o 00:03:42.113 SYMLINK libspdk_iscsi.so 00:03:42.113 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:42.113 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:42.113 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:42.113 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:42.113 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:42.370 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:42.370 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:42.370 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:42.370 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:42.370 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:42.370 LIB libspdk_nvmf.a 00:03:42.370 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:42.627 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:42.627 CC lib/ftl/utils/ftl_conf.o 00:03:42.627 CC lib/ftl/utils/ftl_md.o 00:03:42.627 CC lib/ftl/utils/ftl_mempool.o 00:03:42.627 CC lib/ftl/utils/ftl_bitmap.o 00:03:42.627 CC lib/ftl/utils/ftl_property.o 00:03:42.627 SO libspdk_nvmf.so.20.0 00:03:42.627 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:42.627 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:42.885 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:42.885 LIB libspdk_vhost.a 00:03:42.885 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:42.885 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:42.885 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:42.885 SO libspdk_vhost.so.8.0 00:03:42.885 SYMLINK libspdk_nvmf.so 00:03:42.885 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:42.885 SYMLINK libspdk_vhost.so 00:03:42.885 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:42.885 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:42.885 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:42.885 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:43.154 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:43.154 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:43.154 CC lib/ftl/base/ftl_base_dev.o 00:03:43.154 CC lib/ftl/base/ftl_base_bdev.o 00:03:43.154 CC 
lib/ftl/ftl_trace.o 00:03:43.412 LIB libspdk_ftl.a 00:03:43.670 SO libspdk_ftl.so.9.0 00:03:43.966 SYMLINK libspdk_ftl.so 00:03:44.531 CC module/env_dpdk/env_dpdk_rpc.o 00:03:44.531 CC module/scheduler/gscheduler/gscheduler.o 00:03:44.531 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:44.531 CC module/keyring/file/keyring.o 00:03:44.531 CC module/blob/bdev/blob_bdev.o 00:03:44.531 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:44.531 CC module/fsdev/aio/fsdev_aio.o 00:03:44.531 CC module/keyring/linux/keyring.o 00:03:44.531 CC module/sock/posix/posix.o 00:03:44.531 CC module/accel/error/accel_error.o 00:03:44.531 LIB libspdk_env_dpdk_rpc.a 00:03:44.531 SO libspdk_env_dpdk_rpc.so.6.0 00:03:44.531 SYMLINK libspdk_env_dpdk_rpc.so 00:03:44.531 CC module/keyring/linux/keyring_rpc.o 00:03:44.789 CC module/accel/error/accel_error_rpc.o 00:03:44.789 CC module/keyring/file/keyring_rpc.o 00:03:44.789 LIB libspdk_scheduler_dpdk_governor.a 00:03:44.789 LIB libspdk_scheduler_gscheduler.a 00:03:44.789 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:44.789 SO libspdk_scheduler_gscheduler.so.4.0 00:03:44.789 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:44.789 LIB libspdk_scheduler_dynamic.a 00:03:44.789 LIB libspdk_keyring_linux.a 00:03:44.789 SO libspdk_scheduler_dynamic.so.4.0 00:03:44.789 SO libspdk_keyring_linux.so.1.0 00:03:44.789 SYMLINK libspdk_scheduler_gscheduler.so 00:03:44.789 LIB libspdk_keyring_file.a 00:03:44.789 LIB libspdk_accel_error.a 00:03:44.789 SO libspdk_keyring_file.so.2.0 00:03:44.789 SO libspdk_accel_error.so.2.0 00:03:44.789 LIB libspdk_blob_bdev.a 00:03:44.789 SYMLINK libspdk_scheduler_dynamic.so 00:03:44.789 SYMLINK libspdk_keyring_linux.so 00:03:44.789 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:45.046 SO libspdk_blob_bdev.so.11.0 00:03:45.046 SYMLINK libspdk_accel_error.so 00:03:45.046 CC module/fsdev/aio/linux_aio_mgr.o 00:03:45.046 SYMLINK libspdk_keyring_file.so 00:03:45.046 CC module/accel/ioat/accel_ioat.o 00:03:45.046 CC module/accel/ioat/accel_ioat_rpc.o 00:03:45.046 CC module/accel/dsa/accel_dsa.o 00:03:45.046 CC module/accel/dsa/accel_dsa_rpc.o 00:03:45.046 SYMLINK libspdk_blob_bdev.so 00:03:45.046 CC module/accel/iaa/accel_iaa.o 00:03:45.304 CC module/accel/iaa/accel_iaa_rpc.o 00:03:45.304 LIB libspdk_accel_ioat.a 00:03:45.304 SO libspdk_accel_ioat.so.6.0 00:03:45.304 CC module/bdev/delay/vbdev_delay.o 00:03:45.304 CC module/bdev/error/vbdev_error.o 00:03:45.304 CC module/bdev/gpt/gpt.o 00:03:45.304 LIB libspdk_accel_dsa.a 00:03:45.304 SYMLINK libspdk_accel_ioat.so 00:03:45.304 CC module/bdev/gpt/vbdev_gpt.o 00:03:45.304 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:45.304 SO libspdk_accel_dsa.so.5.0 00:03:45.304 CC module/blobfs/bdev/blobfs_bdev.o 00:03:45.304 LIB libspdk_accel_iaa.a 00:03:45.304 LIB libspdk_fsdev_aio.a 00:03:45.562 SO libspdk_accel_iaa.so.3.0 00:03:45.562 SO libspdk_fsdev_aio.so.1.0 00:03:45.562 SYMLINK libspdk_accel_dsa.so 00:03:45.562 SYMLINK libspdk_accel_iaa.so 00:03:45.563 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:45.563 SYMLINK libspdk_fsdev_aio.so 00:03:45.563 CC module/bdev/error/vbdev_error_rpc.o 00:03:45.563 LIB libspdk_sock_posix.a 00:03:45.563 SO libspdk_sock_posix.so.6.0 00:03:45.563 CC module/bdev/lvol/vbdev_lvol.o 00:03:45.820 SYMLINK libspdk_sock_posix.so 00:03:45.820 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:45.820 LIB libspdk_blobfs_bdev.a 00:03:45.820 LIB libspdk_bdev_gpt.a 00:03:45.820 SO libspdk_blobfs_bdev.so.6.0 00:03:45.820 CC module/bdev/malloc/bdev_malloc.o 00:03:45.820 SO libspdk_bdev_gpt.so.6.0 
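Annotation: the module objects linked in this stretch (schedulers, keyring, accel, sock_posix, the bdev_* modules) all register themselves with SPDK's event framework and come up when an application starts it. A minimal sketch of that entry point using the public spdk_app_* calls; the app name and the immediate stop are arbitrary for the example:

    /* app_demo.c - sketch of the event-framework entry point that pulls
     * in the subsystem modules being linked above. */
    #include "spdk/event.h"
    #include "spdk/log.h"

    static void
    start_fn(void *ctx)
    {
        /* Runs once every registered subsystem has initialized. */
        SPDK_NOTICELOG("app framework up; subsystems initialized\n");
        spdk_app_stop(0);
    }

    int main(int argc, char **argv)
    {
        struct spdk_app_opts opts;

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "log_demo";

        int rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
    }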
00:03:45.820 LIB libspdk_bdev_error.a 00:03:45.820 CC module/bdev/null/bdev_null.o 00:03:45.820 LIB libspdk_bdev_delay.a 00:03:45.820 CC module/bdev/nvme/bdev_nvme.o 00:03:45.820 SO libspdk_bdev_error.so.6.0 00:03:45.820 SO libspdk_bdev_delay.so.6.0 00:03:45.820 CC module/bdev/passthru/vbdev_passthru.o 00:03:45.820 SYMLINK libspdk_bdev_gpt.so 00:03:45.820 SYMLINK libspdk_blobfs_bdev.so 00:03:45.820 CC module/bdev/null/bdev_null_rpc.o 00:03:45.820 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:45.820 SYMLINK libspdk_bdev_error.so 00:03:45.820 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:45.820 SYMLINK libspdk_bdev_delay.so 00:03:45.820 CC module/bdev/nvme/nvme_rpc.o 00:03:46.078 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:46.078 LIB libspdk_bdev_null.a 00:03:46.078 SO libspdk_bdev_null.so.6.0 00:03:46.335 LIB libspdk_bdev_malloc.a 00:03:46.335 SYMLINK libspdk_bdev_null.so 00:03:46.335 LIB libspdk_bdev_passthru.a 00:03:46.335 SO libspdk_bdev_malloc.so.6.0 00:03:46.335 SO libspdk_bdev_passthru.so.6.0 00:03:46.335 CC module/bdev/raid/bdev_raid.o 00:03:46.335 LIB libspdk_bdev_lvol.a 00:03:46.335 CC module/bdev/split/vbdev_split.o 00:03:46.335 SYMLINK libspdk_bdev_malloc.so 00:03:46.335 CC module/bdev/split/vbdev_split_rpc.o 00:03:46.335 SO libspdk_bdev_lvol.so.6.0 00:03:46.335 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:46.335 SYMLINK libspdk_bdev_passthru.so 00:03:46.335 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:46.335 CC module/bdev/xnvme/bdev_xnvme.o 00:03:46.592 CC module/bdev/aio/bdev_aio.o 00:03:46.592 SYMLINK libspdk_bdev_lvol.so 00:03:46.592 CC module/bdev/nvme/bdev_mdns_client.o 00:03:46.592 CC module/bdev/nvme/vbdev_opal.o 00:03:46.592 CC module/bdev/ftl/bdev_ftl.o 00:03:46.592 LIB libspdk_bdev_split.a 00:03:46.592 SO libspdk_bdev_split.so.6.0 00:03:46.850 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:46.850 SYMLINK libspdk_bdev_split.so 00:03:46.850 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:46.850 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:46.850 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:46.850 LIB libspdk_bdev_zone_block.a 00:03:46.850 CC module/bdev/aio/bdev_aio_rpc.o 00:03:47.107 SO libspdk_bdev_zone_block.so.6.0 00:03:47.107 CC module/bdev/raid/bdev_raid_rpc.o 00:03:47.107 SYMLINK libspdk_bdev_zone_block.so 00:03:47.107 LIB libspdk_bdev_xnvme.a 00:03:47.107 CC module/bdev/raid/bdev_raid_sb.o 00:03:47.107 LIB libspdk_bdev_ftl.a 00:03:47.107 CC module/bdev/raid/raid0.o 00:03:47.107 SO libspdk_bdev_xnvme.so.3.0 00:03:47.107 SO libspdk_bdev_ftl.so.6.0 00:03:47.365 CC module/bdev/iscsi/bdev_iscsi.o 00:03:47.365 SYMLINK libspdk_bdev_ftl.so 00:03:47.365 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:47.365 LIB libspdk_bdev_aio.a 00:03:47.365 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:47.365 SYMLINK libspdk_bdev_xnvme.so 00:03:47.365 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:47.365 SO libspdk_bdev_aio.so.6.0 00:03:47.365 CC module/bdev/raid/raid1.o 00:03:47.365 SYMLINK libspdk_bdev_aio.so 00:03:47.365 CC module/bdev/raid/concat.o 00:03:47.365 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:47.931 LIB libspdk_bdev_iscsi.a 00:03:47.931 SO libspdk_bdev_iscsi.so.6.0 00:03:47.931 SYMLINK libspdk_bdev_iscsi.so 00:03:47.931 LIB libspdk_bdev_virtio.a 00:03:47.931 SO libspdk_bdev_virtio.so.6.0 00:03:48.188 SYMLINK libspdk_bdev_virtio.so 00:03:48.188 LIB libspdk_bdev_raid.a 00:03:48.445 SO libspdk_bdev_raid.so.6.0 00:03:48.445 SYMLINK libspdk_bdev_raid.so 00:03:49.818 LIB libspdk_bdev_nvme.a 00:03:49.818 SO libspdk_bdev_nvme.so.7.1 00:03:49.818 SYMLINK 
libspdk_bdev_nvme.so 00:03:50.386 CC module/event/subsystems/vmd/vmd.o 00:03:50.386 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:50.386 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:50.386 CC module/event/subsystems/sock/sock.o 00:03:50.386 CC module/event/subsystems/iobuf/iobuf.o 00:03:50.386 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:50.386 CC module/event/subsystems/keyring/keyring.o 00:03:50.386 CC module/event/subsystems/fsdev/fsdev.o 00:03:50.386 CC module/event/subsystems/scheduler/scheduler.o 00:03:50.386 LIB libspdk_event_vmd.a 00:03:50.644 SO libspdk_event_vmd.so.6.0 00:03:50.644 LIB libspdk_event_sock.a 00:03:50.644 LIB libspdk_event_keyring.a 00:03:50.644 LIB libspdk_event_fsdev.a 00:03:50.644 LIB libspdk_event_iobuf.a 00:03:50.644 LIB libspdk_event_vhost_blk.a 00:03:50.644 SO libspdk_event_sock.so.5.0 00:03:50.644 LIB libspdk_event_scheduler.a 00:03:50.644 SO libspdk_event_keyring.so.1.0 00:03:50.644 SO libspdk_event_fsdev.so.1.0 00:03:50.644 SO libspdk_event_vhost_blk.so.3.0 00:03:50.644 SYMLINK libspdk_event_vmd.so 00:03:50.644 SO libspdk_event_scheduler.so.4.0 00:03:50.644 SO libspdk_event_iobuf.so.3.0 00:03:50.644 SYMLINK libspdk_event_sock.so 00:03:50.644 SYMLINK libspdk_event_vhost_blk.so 00:03:50.644 SYMLINK libspdk_event_scheduler.so 00:03:50.644 SYMLINK libspdk_event_keyring.so 00:03:50.644 SYMLINK libspdk_event_fsdev.so 00:03:50.644 SYMLINK libspdk_event_iobuf.so 00:03:50.902 CC module/event/subsystems/accel/accel.o 00:03:51.161 LIB libspdk_event_accel.a 00:03:51.161 SO libspdk_event_accel.so.6.0 00:03:51.161 SYMLINK libspdk_event_accel.so 00:03:51.419 CC module/event/subsystems/bdev/bdev.o 00:03:51.677 LIB libspdk_event_bdev.a 00:03:51.677 SO libspdk_event_bdev.so.6.0 00:03:51.677 SYMLINK libspdk_event_bdev.so 00:03:51.935 CC module/event/subsystems/scsi/scsi.o 00:03:51.935 CC module/event/subsystems/ublk/ublk.o 00:03:51.935 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:51.935 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:51.935 CC module/event/subsystems/nbd/nbd.o 00:03:52.193 LIB libspdk_event_scsi.a 00:03:52.193 LIB libspdk_event_ublk.a 00:03:52.193 SO libspdk_event_scsi.so.6.0 00:03:52.193 LIB libspdk_event_nbd.a 00:03:52.193 SO libspdk_event_ublk.so.3.0 00:03:52.193 SO libspdk_event_nbd.so.6.0 00:03:52.193 SYMLINK libspdk_event_scsi.so 00:03:52.193 SYMLINK libspdk_event_ublk.so 00:03:52.193 SYMLINK libspdk_event_nbd.so 00:03:52.452 LIB libspdk_event_nvmf.a 00:03:52.452 SO libspdk_event_nvmf.so.6.0 00:03:52.452 SYMLINK libspdk_event_nvmf.so 00:03:52.452 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:52.452 CC module/event/subsystems/iscsi/iscsi.o 00:03:52.710 LIB libspdk_event_vhost_scsi.a 00:03:52.710 LIB libspdk_event_iscsi.a 00:03:52.710 SO libspdk_event_vhost_scsi.so.3.0 00:03:52.710 SO libspdk_event_iscsi.so.6.0 00:03:52.710 SYMLINK libspdk_event_vhost_scsi.so 00:03:52.710 SYMLINK libspdk_event_iscsi.so 00:03:52.968 SO libspdk.so.6.0 00:03:52.968 SYMLINK libspdk.so 00:03:53.227 CC app/trace_record/trace_record.o 00:03:53.227 CC app/spdk_nvme_perf/perf.o 00:03:53.227 CXX app/trace/trace.o 00:03:53.227 CC app/spdk_lspci/spdk_lspci.o 00:03:53.227 CC app/iscsi_tgt/iscsi_tgt.o 00:03:53.227 CC app/nvmf_tgt/nvmf_main.o 00:03:53.484 CC test/thread/poller_perf/poller_perf.o 00:03:53.484 CC app/spdk_tgt/spdk_tgt.o 00:03:53.484 CC examples/util/zipf/zipf.o 00:03:53.484 CC test/dma/test_dma/test_dma.o 00:03:53.484 LINK spdk_lspci 00:03:53.742 LINK iscsi_tgt 00:03:53.742 LINK nvmf_tgt 00:03:53.742 LINK poller_perf 00:03:53.742 LINK 
zipf 00:03:53.742 LINK spdk_tgt 00:03:53.742 LINK spdk_trace_record 00:03:53.742 LINK spdk_trace 00:03:53.742 CC app/spdk_nvme_identify/identify.o 00:03:54.000 TEST_HEADER include/spdk/accel.h 00:03:54.000 TEST_HEADER include/spdk/accel_module.h 00:03:54.000 TEST_HEADER include/spdk/assert.h 00:03:54.000 TEST_HEADER include/spdk/barrier.h 00:03:54.000 TEST_HEADER include/spdk/base64.h 00:03:54.000 TEST_HEADER include/spdk/bdev.h 00:03:54.000 TEST_HEADER include/spdk/bdev_module.h 00:03:54.000 TEST_HEADER include/spdk/bdev_zone.h 00:03:54.000 LINK test_dma 00:03:54.001 TEST_HEADER include/spdk/bit_array.h 00:03:54.001 TEST_HEADER include/spdk/bit_pool.h 00:03:54.001 TEST_HEADER include/spdk/blob_bdev.h 00:03:54.001 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:54.001 TEST_HEADER include/spdk/blobfs.h 00:03:54.001 TEST_HEADER include/spdk/blob.h 00:03:54.001 TEST_HEADER include/spdk/conf.h 00:03:54.001 TEST_HEADER include/spdk/config.h 00:03:54.001 TEST_HEADER include/spdk/cpuset.h 00:03:54.001 TEST_HEADER include/spdk/crc16.h 00:03:54.001 TEST_HEADER include/spdk/crc32.h 00:03:54.001 TEST_HEADER include/spdk/crc64.h 00:03:54.001 TEST_HEADER include/spdk/dif.h 00:03:54.001 TEST_HEADER include/spdk/dma.h 00:03:54.001 TEST_HEADER include/spdk/endian.h 00:03:54.001 TEST_HEADER include/spdk/env_dpdk.h 00:03:54.001 TEST_HEADER include/spdk/env.h 00:03:54.001 TEST_HEADER include/spdk/event.h 00:03:54.001 TEST_HEADER include/spdk/fd_group.h 00:03:54.001 TEST_HEADER include/spdk/fd.h 00:03:54.001 TEST_HEADER include/spdk/file.h 00:03:54.001 TEST_HEADER include/spdk/fsdev.h 00:03:54.001 TEST_HEADER include/spdk/fsdev_module.h 00:03:54.001 TEST_HEADER include/spdk/ftl.h 00:03:54.001 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:54.001 TEST_HEADER include/spdk/gpt_spec.h 00:03:54.001 TEST_HEADER include/spdk/hexlify.h 00:03:54.001 TEST_HEADER include/spdk/histogram_data.h 00:03:54.001 TEST_HEADER include/spdk/idxd.h 00:03:54.258 TEST_HEADER include/spdk/idxd_spec.h 00:03:54.258 TEST_HEADER include/spdk/init.h 00:03:54.258 CC test/app/bdev_svc/bdev_svc.o 00:03:54.258 TEST_HEADER include/spdk/ioat.h 00:03:54.258 TEST_HEADER include/spdk/ioat_spec.h 00:03:54.258 TEST_HEADER include/spdk/iscsi_spec.h 00:03:54.258 TEST_HEADER include/spdk/json.h 00:03:54.258 TEST_HEADER include/spdk/jsonrpc.h 00:03:54.258 TEST_HEADER include/spdk/keyring.h 00:03:54.258 TEST_HEADER include/spdk/keyring_module.h 00:03:54.258 TEST_HEADER include/spdk/likely.h 00:03:54.258 TEST_HEADER include/spdk/log.h 00:03:54.258 CC app/spdk_nvme_discover/discovery_aer.o 00:03:54.258 TEST_HEADER include/spdk/lvol.h 00:03:54.258 TEST_HEADER include/spdk/md5.h 00:03:54.258 TEST_HEADER include/spdk/memory.h 00:03:54.258 TEST_HEADER include/spdk/mmio.h 00:03:54.258 TEST_HEADER include/spdk/nbd.h 00:03:54.258 TEST_HEADER include/spdk/net.h 00:03:54.258 TEST_HEADER include/spdk/notify.h 00:03:54.258 TEST_HEADER include/spdk/nvme.h 00:03:54.258 TEST_HEADER include/spdk/nvme_intel.h 00:03:54.258 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:54.258 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:54.258 TEST_HEADER include/spdk/nvme_spec.h 00:03:54.258 TEST_HEADER include/spdk/nvme_zns.h 00:03:54.258 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:54.258 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:54.258 TEST_HEADER include/spdk/nvmf.h 00:03:54.258 TEST_HEADER include/spdk/nvmf_spec.h 00:03:54.258 TEST_HEADER include/spdk/nvmf_transport.h 00:03:54.258 TEST_HEADER include/spdk/opal.h 00:03:54.258 TEST_HEADER include/spdk/opal_spec.h 00:03:54.258 
TEST_HEADER include/spdk/pci_ids.h 00:03:54.258 TEST_HEADER include/spdk/pipe.h 00:03:54.258 TEST_HEADER include/spdk/queue.h 00:03:54.258 TEST_HEADER include/spdk/reduce.h 00:03:54.258 TEST_HEADER include/spdk/rpc.h 00:03:54.258 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:54.258 TEST_HEADER include/spdk/scheduler.h 00:03:54.258 TEST_HEADER include/spdk/scsi.h 00:03:54.258 TEST_HEADER include/spdk/scsi_spec.h 00:03:54.258 TEST_HEADER include/spdk/sock.h 00:03:54.258 TEST_HEADER include/spdk/stdinc.h 00:03:54.258 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:54.258 TEST_HEADER include/spdk/string.h 00:03:54.258 TEST_HEADER include/spdk/thread.h 00:03:54.258 TEST_HEADER include/spdk/trace.h 00:03:54.258 TEST_HEADER include/spdk/trace_parser.h 00:03:54.258 TEST_HEADER include/spdk/tree.h 00:03:54.258 CC examples/ioat/perf/perf.o 00:03:54.258 TEST_HEADER include/spdk/ublk.h 00:03:54.258 TEST_HEADER include/spdk/util.h 00:03:54.258 TEST_HEADER include/spdk/uuid.h 00:03:54.258 TEST_HEADER include/spdk/version.h 00:03:54.258 CC examples/vmd/lsvmd/lsvmd.o 00:03:54.258 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:54.259 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:54.259 TEST_HEADER include/spdk/vhost.h 00:03:54.259 TEST_HEADER include/spdk/vmd.h 00:03:54.259 TEST_HEADER include/spdk/xor.h 00:03:54.259 TEST_HEADER include/spdk/zipf.h 00:03:54.259 CXX test/cpp_headers/accel.o 00:03:54.516 CXX test/cpp_headers/accel_module.o 00:03:54.516 LINK bdev_svc 00:03:54.516 LINK ioat_perf 00:03:54.516 LINK lsvmd 00:03:54.516 LINK spdk_nvme_discover 00:03:54.516 CC examples/vmd/led/led.o 00:03:54.775 CXX test/cpp_headers/assert.o 00:03:54.775 CXX test/cpp_headers/barrier.o 00:03:54.775 CXX test/cpp_headers/base64.o 00:03:54.775 LINK led 00:03:55.033 CC examples/ioat/verify/verify.o 00:03:55.033 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:55.033 LINK spdk_nvme_perf 00:03:55.033 LINK nvme_fuzz 00:03:55.290 CXX test/cpp_headers/bdev.o 00:03:55.290 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:55.290 LINK verify 00:03:55.290 CC test/env/mem_callbacks/mem_callbacks.o 00:03:55.290 CC test/app/histogram_perf/histogram_perf.o 00:03:55.290 CC test/app/jsoncat/jsoncat.o 00:03:55.290 CXX test/cpp_headers/bdev_module.o 00:03:55.290 LINK spdk_nvme_identify 00:03:55.548 CC test/app/stub/stub.o 00:03:55.548 LINK histogram_perf 00:03:55.548 LINK jsoncat 00:03:55.806 CXX test/cpp_headers/bdev_zone.o 00:03:55.806 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:55.806 CC examples/idxd/perf/perf.o 00:03:55.806 LINK stub 00:03:55.806 CC app/spdk_top/spdk_top.o 00:03:55.806 LINK vhost_fuzz 00:03:56.063 LINK mem_callbacks 00:03:56.063 CXX test/cpp_headers/bit_array.o 00:03:56.063 CXX test/cpp_headers/bit_pool.o 00:03:56.063 CC test/event/event_perf/event_perf.o 00:03:56.063 CC test/event/reactor/reactor.o 00:03:56.063 LINK interrupt_tgt 00:03:56.063 CC examples/thread/thread/thread_ex.o 00:03:56.321 CXX test/cpp_headers/blob_bdev.o 00:03:56.321 CC test/env/vtophys/vtophys.o 00:03:56.321 CC test/nvme/aer/aer.o 00:03:56.578 LINK reactor 00:03:56.578 LINK event_perf 00:03:56.578 LINK thread 00:03:56.578 CXX test/cpp_headers/blobfs_bdev.o 00:03:56.578 LINK idxd_perf 00:03:56.578 LINK vtophys 00:03:56.578 CXX test/cpp_headers/blobfs.o 00:03:56.578 CC test/nvme/reset/reset.o 00:03:56.835 LINK aer 00:03:56.835 CXX test/cpp_headers/blob.o 00:03:56.835 CC test/nvme/sgl/sgl.o 00:03:56.835 CC test/event/reactor_perf/reactor_perf.o 00:03:56.835 CXX test/cpp_headers/conf.o 00:03:57.092 CC 
test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:57.092 LINK reactor_perf 00:03:57.092 CXX test/cpp_headers/config.o 00:03:57.092 LINK iscsi_fuzz 00:03:57.092 CC examples/sock/hello_world/hello_sock.o 00:03:57.092 CXX test/cpp_headers/cpuset.o 00:03:57.092 LINK reset 00:03:57.092 LINK sgl 00:03:57.092 CC test/nvme/e2edp/nvme_dp.o 00:03:57.350 LINK env_dpdk_post_init 00:03:57.350 CC test/event/app_repeat/app_repeat.o 00:03:57.350 CC test/rpc_client/rpc_client_test.o 00:03:57.350 CXX test/cpp_headers/crc16.o 00:03:57.350 CXX test/cpp_headers/crc32.o 00:03:57.608 CC test/event/scheduler/scheduler.o 00:03:57.608 LINK app_repeat 00:03:57.608 CC test/env/memory/memory_ut.o 00:03:57.608 CXX test/cpp_headers/crc64.o 00:03:57.608 LINK hello_sock 00:03:57.608 LINK nvme_dp 00:03:57.608 CC test/env/pci/pci_ut.o 00:03:57.608 LINK rpc_client_test 00:03:57.866 CC test/accel/dif/dif.o 00:03:57.866 CXX test/cpp_headers/dif.o 00:03:57.866 LINK spdk_top 00:03:57.866 LINK scheduler 00:03:57.866 CXX test/cpp_headers/dma.o 00:03:57.866 CC test/nvme/overhead/overhead.o 00:03:58.124 CC examples/accel/perf/accel_perf.o 00:03:58.124 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:58.124 CC test/blobfs/mkfs/mkfs.o 00:03:58.124 CC app/vhost/vhost.o 00:03:58.124 CXX test/cpp_headers/endian.o 00:03:58.124 LINK pci_ut 00:03:58.381 CC app/spdk_dd/spdk_dd.o 00:03:58.381 LINK vhost 00:03:58.381 CXX test/cpp_headers/env_dpdk.o 00:03:58.381 LINK overhead 00:03:58.381 LINK mkfs 00:03:58.381 LINK hello_fsdev 00:03:58.381 CXX test/cpp_headers/env.o 00:03:58.638 CC test/nvme/err_injection/err_injection.o 00:03:58.638 LINK dif 00:03:58.638 LINK accel_perf 00:03:58.638 CXX test/cpp_headers/event.o 00:03:58.895 CC examples/nvme/hello_world/hello_world.o 00:03:58.895 CC app/fio/nvme/fio_plugin.o 00:03:58.895 LINK spdk_dd 00:03:58.895 CC examples/blob/hello_world/hello_blob.o 00:03:58.895 CC test/lvol/esnap/esnap.o 00:03:58.895 LINK err_injection 00:03:58.895 CXX test/cpp_headers/fd_group.o 00:03:58.895 CC test/nvme/startup/startup.o 00:03:59.153 CXX test/cpp_headers/fd.o 00:03:59.153 CXX test/cpp_headers/file.o 00:03:59.153 LINK hello_world 00:03:59.153 LINK memory_ut 00:03:59.153 LINK hello_blob 00:03:59.153 CC test/bdev/bdevio/bdevio.o 00:03:59.153 LINK startup 00:03:59.153 CXX test/cpp_headers/fsdev.o 00:03:59.153 CXX test/cpp_headers/fsdev_module.o 00:03:59.153 CC app/fio/bdev/fio_plugin.o 00:03:59.409 CC examples/nvme/reconnect/reconnect.o 00:03:59.410 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.410 CXX test/cpp_headers/ftl.o 00:03:59.410 CC test/nvme/reserve/reserve.o 00:03:59.410 CC examples/blob/cli/blobcli.o 00:03:59.667 CC test/nvme/simple_copy/simple_copy.o 00:03:59.667 LINK spdk_nvme 00:03:59.667 LINK bdevio 00:03:59.667 CXX test/cpp_headers/fuse_dispatcher.o 00:03:59.667 LINK reserve 00:03:59.667 CC test/nvme/connect_stress/connect_stress.o 00:03:59.667 LINK reconnect 00:03:59.925 CXX test/cpp_headers/gpt_spec.o 00:03:59.925 LINK simple_copy 00:03:59.925 LINK spdk_bdev 00:03:59.925 CXX test/cpp_headers/hexlify.o 00:03:59.925 CC test/nvme/boot_partition/boot_partition.o 00:03:59.925 CXX test/cpp_headers/histogram_data.o 00:03:59.925 CC examples/nvme/arbitration/arbitration.o 00:04:00.182 LINK connect_stress 00:04:00.182 LINK nvme_manage 00:04:00.182 LINK blobcli 00:04:00.182 CC examples/nvme/hotplug/hotplug.o 00:04:00.182 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:00.182 LINK boot_partition 00:04:00.182 CC examples/nvme/abort/abort.o 00:04:00.182 CXX test/cpp_headers/idxd.o 00:04:00.441 CC 
test/nvme/compliance/nvme_compliance.o 00:04:00.441 LINK cmb_copy 00:04:00.441 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:00.441 LINK hotplug 00:04:00.441 CXX test/cpp_headers/idxd_spec.o 00:04:00.441 CC test/nvme/fused_ordering/fused_ordering.o 00:04:00.441 LINK arbitration 00:04:00.441 CC examples/bdev/hello_world/hello_bdev.o 00:04:00.698 LINK pmr_persistence 00:04:00.698 CXX test/cpp_headers/init.o 00:04:00.698 CC examples/bdev/bdevperf/bdevperf.o 00:04:00.698 LINK abort 00:04:00.698 LINK fused_ordering 00:04:00.698 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:00.698 LINK nvme_compliance 00:04:00.698 CXX test/cpp_headers/ioat.o 00:04:00.698 CC test/nvme/fdp/fdp.o 00:04:00.956 LINK hello_bdev 00:04:00.956 CXX test/cpp_headers/ioat_spec.o 00:04:00.956 CXX test/cpp_headers/iscsi_spec.o 00:04:00.956 CC test/nvme/cuse/cuse.o 00:04:00.956 LINK doorbell_aers 00:04:00.956 CXX test/cpp_headers/json.o 00:04:00.956 CXX test/cpp_headers/jsonrpc.o 00:04:00.956 CXX test/cpp_headers/keyring.o 00:04:01.215 CXX test/cpp_headers/keyring_module.o 00:04:01.215 CXX test/cpp_headers/likely.o 00:04:01.215 CXX test/cpp_headers/log.o 00:04:01.215 CXX test/cpp_headers/lvol.o 00:04:01.215 CXX test/cpp_headers/md5.o 00:04:01.215 CXX test/cpp_headers/memory.o 00:04:01.215 CXX test/cpp_headers/mmio.o 00:04:01.215 LINK fdp 00:04:01.215 CXX test/cpp_headers/nbd.o 00:04:01.215 CXX test/cpp_headers/net.o 00:04:01.215 CXX test/cpp_headers/notify.o 00:04:01.473 CXX test/cpp_headers/nvme.o 00:04:01.473 CXX test/cpp_headers/nvme_intel.o 00:04:01.473 CXX test/cpp_headers/nvme_ocssd.o 00:04:01.473 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:01.473 CXX test/cpp_headers/nvme_spec.o 00:04:01.473 CXX test/cpp_headers/nvme_zns.o 00:04:01.473 CXX test/cpp_headers/nvmf_cmd.o 00:04:01.473 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:01.473 CXX test/cpp_headers/nvmf.o 00:04:01.730 CXX test/cpp_headers/nvmf_spec.o 00:04:01.730 CXX test/cpp_headers/nvmf_transport.o 00:04:01.730 CXX test/cpp_headers/opal.o 00:04:01.730 CXX test/cpp_headers/opal_spec.o 00:04:01.730 LINK bdevperf 00:04:01.730 CXX test/cpp_headers/pci_ids.o 00:04:01.730 CXX test/cpp_headers/pipe.o 00:04:01.730 CXX test/cpp_headers/queue.o 00:04:01.730 CXX test/cpp_headers/reduce.o 00:04:01.730 CXX test/cpp_headers/rpc.o 00:04:01.987 CXX test/cpp_headers/scheduler.o 00:04:01.987 CXX test/cpp_headers/scsi.o 00:04:01.987 CXX test/cpp_headers/scsi_spec.o 00:04:01.987 CXX test/cpp_headers/sock.o 00:04:01.987 CXX test/cpp_headers/stdinc.o 00:04:01.987 CXX test/cpp_headers/string.o 00:04:01.987 CXX test/cpp_headers/thread.o 00:04:01.987 CXX test/cpp_headers/trace.o 00:04:01.987 CXX test/cpp_headers/trace_parser.o 00:04:02.244 CXX test/cpp_headers/tree.o 00:04:02.244 CC examples/nvmf/nvmf/nvmf.o 00:04:02.244 CXX test/cpp_headers/ublk.o 00:04:02.244 CXX test/cpp_headers/util.o 00:04:02.244 CXX test/cpp_headers/uuid.o 00:04:02.244 CXX test/cpp_headers/version.o 00:04:02.244 CXX test/cpp_headers/vfio_user_pci.o 00:04:02.244 CXX test/cpp_headers/vfio_user_spec.o 00:04:02.244 CXX test/cpp_headers/vhost.o 00:04:02.244 CXX test/cpp_headers/vmd.o 00:04:02.244 CXX test/cpp_headers/xor.o 00:04:02.502 CXX test/cpp_headers/zipf.o 00:04:02.502 LINK nvmf 00:04:02.502 LINK cuse 00:04:06.693 LINK esnap 00:04:06.693 ************************************ 00:04:06.693 END TEST make 00:04:06.693 ************************************ 00:04:06.693 00:04:06.693 real 1m48.344s 00:04:06.693 user 10m41.650s 00:04:06.693 sys 1m50.590s 00:04:06.693 17:54:22 make -- 
common/autotest_common.sh@1128 -- $ xtrace_disable 00:04:06.693 17:54:22 make -- common/autotest_common.sh@10 -- $ set +x 00:04:06.693 17:54:22 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:06.693 17:54:22 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:06.693 17:54:22 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:06.693 17:54:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.693 17:54:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:06.693 17:54:22 -- pm/common@44 -- $ pid=5337 00:04:06.693 17:54:22 -- pm/common@50 -- $ kill -TERM 5337 00:04:06.693 17:54:22 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.693 17:54:22 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:06.693 17:54:22 -- pm/common@44 -- $ pid=5339 00:04:06.693 17:54:22 -- pm/common@50 -- $ kill -TERM 5339 00:04:06.693 17:54:22 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:06.693 17:54:22 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:06.693 17:54:22 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:06.693 17:54:22 -- common/autotest_common.sh@1691 -- # lcov --version 00:04:06.693 17:54:22 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:06.693 17:54:22 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:06.693 17:54:22 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:06.693 17:54:22 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:06.693 17:54:22 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:06.693 17:54:22 -- scripts/common.sh@336 -- # IFS=.-: 00:04:06.693 17:54:22 -- scripts/common.sh@336 -- # read -ra ver1 00:04:06.693 17:54:22 -- scripts/common.sh@337 -- # IFS=.-: 00:04:06.693 17:54:22 -- scripts/common.sh@337 -- # read -ra ver2 00:04:06.693 17:54:22 -- scripts/common.sh@338 -- # local 'op=<' 00:04:06.693 17:54:22 -- scripts/common.sh@340 -- # ver1_l=2 00:04:06.693 17:54:22 -- scripts/common.sh@341 -- # ver2_l=1 00:04:06.693 17:54:22 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:06.693 17:54:22 -- scripts/common.sh@344 -- # case "$op" in 00:04:06.693 17:54:22 -- scripts/common.sh@345 -- # : 1 00:04:06.693 17:54:22 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:06.693 17:54:22 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:06.693 17:54:22 -- scripts/common.sh@365 -- # decimal 1 00:04:06.693 17:54:22 -- scripts/common.sh@353 -- # local d=1 00:04:06.693 17:54:22 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:06.693 17:54:22 -- scripts/common.sh@355 -- # echo 1 00:04:06.693 17:54:22 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:06.693 17:54:22 -- scripts/common.sh@366 -- # decimal 2 00:04:06.693 17:54:22 -- scripts/common.sh@353 -- # local d=2 00:04:06.693 17:54:22 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:06.693 17:54:22 -- scripts/common.sh@355 -- # echo 2 00:04:06.693 17:54:23 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:06.693 17:54:23 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:06.693 17:54:23 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:06.693 17:54:23 -- scripts/common.sh@368 -- # return 0 00:04:06.693 17:54:23 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:06.693 17:54:23 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:06.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.693 --rc genhtml_branch_coverage=1 00:04:06.693 --rc genhtml_function_coverage=1 00:04:06.693 --rc genhtml_legend=1 00:04:06.693 --rc geninfo_all_blocks=1 00:04:06.693 --rc geninfo_unexecuted_blocks=1 00:04:06.693 00:04:06.693 ' 00:04:06.693 17:54:23 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:06.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.693 --rc genhtml_branch_coverage=1 00:04:06.693 --rc genhtml_function_coverage=1 00:04:06.693 --rc genhtml_legend=1 00:04:06.693 --rc geninfo_all_blocks=1 00:04:06.693 --rc geninfo_unexecuted_blocks=1 00:04:06.693 00:04:06.693 ' 00:04:06.693 17:54:23 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:06.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.693 --rc genhtml_branch_coverage=1 00:04:06.693 --rc genhtml_function_coverage=1 00:04:06.693 --rc genhtml_legend=1 00:04:06.693 --rc geninfo_all_blocks=1 00:04:06.693 --rc geninfo_unexecuted_blocks=1 00:04:06.693 00:04:06.693 ' 00:04:06.693 17:54:23 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:06.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:06.693 --rc genhtml_branch_coverage=1 00:04:06.693 --rc genhtml_function_coverage=1 00:04:06.693 --rc genhtml_legend=1 00:04:06.693 --rc geninfo_all_blocks=1 00:04:06.693 --rc geninfo_unexecuted_blocks=1 00:04:06.693 00:04:06.693 ' 00:04:06.693 17:54:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:06.693 17:54:23 -- nvmf/common.sh@7 -- # uname -s 00:04:06.693 17:54:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:06.693 17:54:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:06.693 17:54:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:06.693 17:54:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:06.693 17:54:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:06.693 17:54:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:06.693 17:54:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:06.693 17:54:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:06.693 17:54:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:06.693 17:54:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:06.693 17:54:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae374150-be72-4028-b88b-bc3663361fee 00:04:06.693 
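The xtrace above shows autotest gating its coverage flags on the installed lcov: scripts/common.sh splits "1.15" and "2" on the field separators .-:, compares the fields numerically with absent fields treated as zero, concludes 1.15 < 2, and selects the pre-2.x option set. A condensed sketch of that element-wise comparison (the function name and boolean return convention are assumptions; the real script is cmp_versions driven by an operator argument):

    version_lt() {
        local IFS=.-:
        local -a ver1=($1) ver2=($2)
        local v
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # absent fields compare as 0
            (( d1 < d2 )) && return 0
            (( d1 > d2 )) && return 1
        done
        return 1   # equal is not less-than
    }

    # Mirrors the `lcov --version | awk '{print $NF}'` probe in the trace.
    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "pre-2.x lcov: use legacy --rc flags"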
17:54:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=ae374150-be72-4028-b88b-bc3663361fee 00:04:06.693 17:54:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:06.693 17:54:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:06.693 17:54:23 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:06.693 17:54:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:06.693 17:54:23 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:06.693 17:54:23 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:06.693 17:54:23 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:06.693 17:54:23 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:06.693 17:54:23 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:06.693 17:54:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.693 17:54:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.693 17:54:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.693 17:54:23 -- paths/export.sh@5 -- # export PATH 00:04:06.693 17:54:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:06.693 17:54:23 -- nvmf/common.sh@51 -- # : 0 00:04:06.693 17:54:23 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:06.693 17:54:23 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:06.693 17:54:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:06.693 17:54:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:06.693 17:54:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:06.693 17:54:23 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:06.693 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:06.693 17:54:23 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:06.693 17:54:23 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:06.693 17:54:23 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:06.693 17:54:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:06.693 17:54:23 -- spdk/autotest.sh@32 -- # uname -s 00:04:06.693 17:54:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:06.693 17:54:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:06.693 17:54:23 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:06.693 17:54:23 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:06.693 17:54:23 -- 
spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:06.693 17:54:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:06.693 17:54:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:06.694 17:54:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:06.694 17:54:23 -- spdk/autotest.sh@48 -- # udevadm_pid=54983 00:04:06.694 17:54:23 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:06.694 17:54:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:06.694 17:54:23 -- pm/common@17 -- # local monitor 00:04:06.694 17:54:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.694 17:54:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.694 17:54:23 -- pm/common@25 -- # sleep 1 00:04:06.694 17:54:23 -- pm/common@21 -- # date +%s 00:04:06.694 17:54:23 -- pm/common@21 -- # date +%s 00:04:06.694 17:54:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730138063 00:04:06.694 17:54:23 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1730138063 00:04:06.694 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730138063_collect-vmstat.pm.log 00:04:06.694 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1730138063_collect-cpu-load.pm.log 00:04:08.064 17:54:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:08.064 17:54:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:08.064 17:54:24 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:08.064 17:54:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.064 17:54:24 -- spdk/autotest.sh@59 -- # create_test_list 00:04:08.064 17:54:24 -- common/autotest_common.sh@750 -- # xtrace_disable 00:04:08.064 17:54:24 -- common/autotest_common.sh@10 -- # set +x 00:04:08.064 17:54:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:08.064 17:54:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:08.064 17:54:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:08.064 17:54:24 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:08.064 17:54:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:08.064 17:54:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:08.064 17:54:24 -- common/autotest_common.sh@1455 -- # uname 00:04:08.064 17:54:24 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:08.064 17:54:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:08.064 17:54:24 -- common/autotest_common.sh@1475 -- # uname 00:04:08.064 17:54:24 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:08.064 17:54:24 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:08.064 17:54:24 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:08.064 lcov: LCOV version 1.15 00:04:08.064 17:54:24 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc 
geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:26.142 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:26.142 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:44.240 17:54:59 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:44.240 17:54:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:44.240 17:54:59 -- common/autotest_common.sh@10 -- # set +x 00:04:44.240 17:54:59 -- spdk/autotest.sh@78 -- # rm -f 00:04:44.240 17:54:59 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:44.240 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:44.499 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:44.499 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:44.499 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:04:44.499 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:04:44.757 17:55:00 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:44.757 17:55:00 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:44.757 17:55:00 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:44.757 17:55:00 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:44.757 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.757 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.757 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.757 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.757 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:04:44.757 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:04:44.757 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.757 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:04:44.757 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:04:44.757 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:04:44.757 17:55:00 
-- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.757 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:44.757 17:55:00 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:04:44.757 17:55:00 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:04:44.757 17:55:00 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:44.757 17:55:00 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:44.757 17:55:00 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.757 17:55:00 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.757 17:55:00 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:44.757 17:55:00 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:44.757 17:55:00 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:44.757 No valid GPT data, bailing 00:04:44.757 17:55:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:44.757 17:55:01 -- scripts/common.sh@394 -- # pt= 00:04:44.757 17:55:01 -- scripts/common.sh@395 -- # return 1 00:04:44.757 17:55:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:44.757 1+0 records in 00:04:44.757 1+0 records out 00:04:44.757 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00304933 s, 344 MB/s 00:04:44.757 17:55:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.758 17:55:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.758 17:55:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:44.758 17:55:01 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:44.758 17:55:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:44.758 No valid GPT data, bailing 00:04:44.758 17:55:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:44.758 17:55:01 -- scripts/common.sh@394 -- # pt= 00:04:44.758 17:55:01 -- scripts/common.sh@395 -- # return 1 00:04:44.758 17:55:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:44.758 1+0 records in 00:04:44.758 1+0 records out 00:04:44.758 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142456 s, 73.6 MB/s 00:04:44.758 17:55:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:44.758 17:55:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:44.758 17:55:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:04:44.758 17:55:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:04:44.758 17:55:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:04:44.758 No valid GPT data, bailing 00:04:44.758 17:55:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:04:45.015 17:55:01 -- scripts/common.sh@394 -- # pt= 00:04:45.015 17:55:01 -- scripts/common.sh@395 -- # return 1 00:04:45.015 17:55:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:04:45.015 1+0 
records in 00:04:45.015 1+0 records out 00:04:45.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00434273 s, 241 MB/s 00:04:45.015 17:55:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:45.015 17:55:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:45.015 17:55:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n2 00:04:45.015 17:55:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n2 pt 00:04:45.015 17:55:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n2 00:04:45.015 No valid GPT data, bailing 00:04:45.015 17:55:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n2 00:04:45.015 17:55:01 -- scripts/common.sh@394 -- # pt= 00:04:45.015 17:55:01 -- scripts/common.sh@395 -- # return 1 00:04:45.015 17:55:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n2 bs=1M count=1 00:04:45.015 1+0 records in 00:04:45.015 1+0 records out 00:04:45.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00379986 s, 276 MB/s 00:04:45.015 17:55:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:45.015 17:55:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:45.015 17:55:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n3 00:04:45.015 17:55:01 -- scripts/common.sh@381 -- # local block=/dev/nvme2n3 pt 00:04:45.015 17:55:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n3 00:04:45.015 No valid GPT data, bailing 00:04:45.015 17:55:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n3 00:04:45.015 17:55:01 -- scripts/common.sh@394 -- # pt= 00:04:45.015 17:55:01 -- scripts/common.sh@395 -- # return 1 00:04:45.015 17:55:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n3 bs=1M count=1 00:04:45.015 1+0 records in 00:04:45.015 1+0 records out 00:04:45.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00386461 s, 271 MB/s 00:04:45.016 17:55:01 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:45.016 17:55:01 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:45.016 17:55:01 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:04:45.016 17:55:01 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:04:45.016 17:55:01 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:04:45.016 No valid GPT data, bailing 00:04:45.016 17:55:01 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:04:45.016 17:55:01 -- scripts/common.sh@394 -- # pt= 00:04:45.016 17:55:01 -- scripts/common.sh@395 -- # return 1 00:04:45.016 17:55:01 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:04:45.016 1+0 records in 00:04:45.016 1+0 records out 00:04:45.016 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00490798 s, 214 MB/s 00:04:45.016 17:55:01 -- spdk/autotest.sh@105 -- # sync 00:04:45.274 17:55:01 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:45.274 17:55:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:45.274 17:55:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:47.175 17:55:03 -- spdk/autotest.sh@111 -- # uname -s 00:04:47.175 17:55:03 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:47.175 17:55:03 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:47.175 17:55:03 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:47.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:48.312 
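Each "No valid GPT data, bailing" and dd pair above is autotest's pre-cleanup deciding a namespace is safe to scrub: it skips zoned namespaces (every queue/zoned attribute in this run reads none), checks for a partition table, and only then zeroes the first MiB. Roughly the same logic in shell (condensed; blkid stands in for the spdk-gpt.py probe, so treat this as a sketch rather than the exact script):

    for sysdev in /sys/block/nvme*; do
        dev=/dev/$(basename "$sysdev")
        # Zoned namespaces cannot take an in-place overwrite, so skip them.
        if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
            continue
        fi
        # Wipe only when no partition table is detected on the namespace.
        if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done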
Hugepages 00:04:48.312 node hugesize free / total 00:04:48.312 node0 1048576kB 0 / 0 00:04:48.312 node0 2048kB 0 / 0 00:04:48.312 00:04:48.312 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:48.312 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:48.312 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:04:48.571 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:48.571 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme2 nvme2n1 nvme2n2 nvme2n3 00:04:48.571 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:04:48.571 17:55:04 -- spdk/autotest.sh@117 -- # uname -s 00:04:48.571 17:55:04 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:48.571 17:55:04 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:48.571 17:55:04 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:49.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:49.704 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.704 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.704 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.704 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:49.963 17:55:06 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:50.899 17:55:07 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:50.899 17:55:07 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:50.899 17:55:07 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:50.899 17:55:07 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:50.899 17:55:07 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:50.899 17:55:07 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:50.899 17:55:07 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:50.899 17:55:07 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:50.899 17:55:07 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:50.899 17:55:07 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:50.899 17:55:07 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:50.899 17:55:07 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:51.466 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:51.466 Waiting for block devices as requested 00:04:51.466 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:51.725 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:51.725 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:51.725 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:56.994 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:56.994 17:55:13 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.994 17:55:13 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:56.994 17:55:13 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:56.994 17:55:13 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:56.994 17:55:13 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.994 17:55:13 -- common/autotest_common.sh@1486 -- # 
[[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:56.994 17:55:13 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:56.994 17:55:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:56.994 17:55:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:56.994 17:55:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:56.994 17:55:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:56.994 17:55:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.994 17:55:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.994 17:55:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.994 17:55:13 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.994 17:55:13 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.994 17:55:13 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.994 17:55:13 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:56.994 17:55:13 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.994 17:55:13 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.994 17:55:13 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:56.994 17:55:13 -- common/autotest_common.sh@1541 -- # continue 00:04:56.994 17:55:13 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.994 17:55:13 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:56.994 17:55:13 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:56.994 17:55:13 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:56.994 17:55:13 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:56.994 17:55:13 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:56.994 17:55:13 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:56.994 17:55:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:56.994 17:55:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:56.994 17:55:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:56.994 17:55:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:56.994 17:55:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.994 17:55:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.994 17:55:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.994 17:55:13 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.994 17:55:13 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.994 17:55:13 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:56.994 17:55:13 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.994 17:55:13 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.994 17:55:13 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.994 17:55:13 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:56.994 17:55:13 -- common/autotest_common.sh@1541 -- # continue 00:04:56.994 17:55:13 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.994 17:55:13 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:56.994 17:55:13 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 
/sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:56.994 17:55:13 -- common/autotest_common.sh@1485 -- # grep 0000:00:12.0/nvme/nvme 00:04:56.994 17:55:13 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:56.994 17:55:13 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:56.994 17:55:13 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:56.995 17:55:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme2 00:04:56.995 17:55:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme2 00:04:56.995 17:55:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme2 ]] 00:04:56.995 17:55:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme2 00:04:56.995 17:55:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.995 17:55:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.995 17:55:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.995 17:55:13 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.995 17:55:13 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.995 17:55:13 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme2 00:04:56.995 17:55:13 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.995 17:55:13 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.995 17:55:13 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.995 17:55:13 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:56.995 17:55:13 -- common/autotest_common.sh@1541 -- # continue 00:04:56.995 17:55:13 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:56.995 17:55:13 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:56.995 17:55:13 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:56.995 17:55:13 -- common/autotest_common.sh@1485 -- # grep 0000:00:13.0/nvme/nvme 00:04:56.995 17:55:13 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:56.995 17:55:13 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:56.995 17:55:13 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:56.995 17:55:13 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme3 00:04:56.995 17:55:13 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme3 00:04:56.995 17:55:13 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme3 ]] 00:04:56.995 17:55:13 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme3 00:04:56.995 17:55:13 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:56.995 17:55:13 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:56.995 17:55:13 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:56.995 17:55:13 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:56.995 17:55:13 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:56.995 17:55:13 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:56.995 17:55:13 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:56.995 17:55:13 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme3 00:04:56.995 17:55:13 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:56.995 17:55:13 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 
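The four near-identical blocks above run the same controller probe once per PCI address: resolve the bdf to its /dev/nvmeX node through /sys/class/nvme, read the OACS field from nvme id-ctrl, and mask bit 3 (Namespace Management); 0x12a & 0x8 is non-zero, so each controller qualifies, and an unvmcap of 0 confirms no unallocated capacity is left over. The same probe condensed into one helper (the function name is an assumption):

    supports_ns_mgmt() {
        local bdf=$1 ctrl oacs
        # Resolve e.g. 0000:00:10.0 -> nvme1 via the sysfs controller path.
        ctrl=$(basename "$(readlink -f /sys/class/nvme/nvme* | grep "$bdf/nvme/")")
        oacs=$(nvme id-ctrl "/dev/$ctrl" | grep oacs | cut -d: -f2)
        # OACS bit 3 advertises Namespace Management/Attachment support.
        (( oacs & 0x8 ))
    }

    supports_ns_mgmt 0000:00:10.0 && echo "controller can manage namespaces"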
00:04:56.995 17:55:13 -- common/autotest_common.sh@1541 -- # continue 00:04:56.995 17:55:13 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:56.995 17:55:13 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:56.995 17:55:13 -- common/autotest_common.sh@10 -- # set +x 00:04:56.995 17:55:13 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:56.995 17:55:13 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:56.995 17:55:13 -- common/autotest_common.sh@10 -- # set +x 00:04:56.995 17:55:13 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:57.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:58.129 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.129 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.129 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.129 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:58.387 17:55:14 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:58.387 17:55:14 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:58.387 17:55:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.387 17:55:14 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:58.387 17:55:14 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:58.387 17:55:14 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.387 17:55:14 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:58.387 17:55:14 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:58.387 17:55:14 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:58.387 17:55:14 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:58.387 17:55:14 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:58.387 17:55:14 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:58.387 17:55:14 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:58.387 17:55:14 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.387 17:55:14 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:58.387 17:55:14 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:58.387 17:55:14 -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:04:58.387 17:55:14 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:58.387 17:55:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:58.387 17:55:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:58.387 17:55:14 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:58.387 17:55:14 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.387 17:55:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:58.387 17:55:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:58.387 17:55:14 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:58.387 17:55:14 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.387 17:55:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:58.387 17:55:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:58.387 17:55:14 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:58.387 17:55:14 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 
00:04:58.387 17:55:14 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:58.387 17:55:14 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:58.387 17:55:14 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:58.387 17:55:14 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:58.387 17:55:14 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:58.387 17:55:14 -- common/autotest_common.sh@1570 -- # return 0 00:04:58.387 17:55:14 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:58.387 17:55:14 -- common/autotest_common.sh@1578 -- # return 0 00:04:58.387 17:55:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:58.387 17:55:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:58.387 17:55:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.387 17:55:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:58.387 17:55:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:58.387 17:55:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:58.387 17:55:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.387 17:55:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:58.387 17:55:14 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.387 17:55:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.387 17:55:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.387 17:55:14 -- common/autotest_common.sh@10 -- # set +x 00:04:58.387 ************************************ 00:04:58.387 START TEST env 00:04:58.387 ************************************ 00:04:58.387 17:55:14 env -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:58.387 * Looking for test storage... 00:04:58.387 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:58.387 17:55:14 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:04:58.387 17:55:14 env -- common/autotest_common.sh@1691 -- # lcov --version 00:04:58.387 17:55:14 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:04:58.647 17:55:14 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:04:58.647 17:55:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:58.647 17:55:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:58.647 17:55:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:58.647 17:55:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:58.647 17:55:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:58.647 17:55:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:58.647 17:55:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:58.647 17:55:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:58.647 17:55:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:58.647 17:55:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:58.647 17:55:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:58.647 17:55:14 env -- scripts/common.sh@344 -- # case "$op" in 00:04:58.647 17:55:14 env -- scripts/common.sh@345 -- # : 1 00:04:58.647 17:55:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:58.647 17:55:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:58.647 17:55:14 env -- scripts/common.sh@365 -- # decimal 1 00:04:58.647 17:55:14 env -- scripts/common.sh@353 -- # local d=1 00:04:58.647 17:55:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:58.647 17:55:14 env -- scripts/common.sh@355 -- # echo 1 00:04:58.647 17:55:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:58.647 17:55:14 env -- scripts/common.sh@366 -- # decimal 2 00:04:58.647 17:55:14 env -- scripts/common.sh@353 -- # local d=2 00:04:58.647 17:55:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:58.647 17:55:14 env -- scripts/common.sh@355 -- # echo 2 00:04:58.647 17:55:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:58.647 17:55:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:58.647 17:55:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:58.647 17:55:14 env -- scripts/common.sh@368 -- # return 0 00:04:58.647 17:55:14 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:58.647 17:55:14 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:04:58.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.647 --rc genhtml_branch_coverage=1 00:04:58.647 --rc genhtml_function_coverage=1 00:04:58.647 --rc genhtml_legend=1 00:04:58.647 --rc geninfo_all_blocks=1 00:04:58.647 --rc geninfo_unexecuted_blocks=1 00:04:58.647 00:04:58.647 ' 00:04:58.647 17:55:14 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:04:58.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.647 --rc genhtml_branch_coverage=1 00:04:58.647 --rc genhtml_function_coverage=1 00:04:58.647 --rc genhtml_legend=1 00:04:58.647 --rc geninfo_all_blocks=1 00:04:58.647 --rc geninfo_unexecuted_blocks=1 00:04:58.647 00:04:58.647 ' 00:04:58.647 17:55:14 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:04:58.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.647 --rc genhtml_branch_coverage=1 00:04:58.647 --rc genhtml_function_coverage=1 00:04:58.647 --rc genhtml_legend=1 00:04:58.647 --rc geninfo_all_blocks=1 00:04:58.647 --rc geninfo_unexecuted_blocks=1 00:04:58.647 00:04:58.647 ' 00:04:58.647 17:55:14 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:04:58.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:58.647 --rc genhtml_branch_coverage=1 00:04:58.647 --rc genhtml_function_coverage=1 00:04:58.647 --rc genhtml_legend=1 00:04:58.647 --rc geninfo_all_blocks=1 00:04:58.647 --rc geninfo_unexecuted_blocks=1 00:04:58.647 00:04:58.647 ' 00:04:58.647 17:55:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.647 17:55:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.647 17:55:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.647 17:55:14 env -- common/autotest_common.sh@10 -- # set +x 00:04:58.647 ************************************ 00:04:58.647 START TEST env_memory 00:04:58.647 ************************************ 00:04:58.647 17:55:14 env.env_memory -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:58.647 00:04:58.647 00:04:58.647 CUnit - A unit testing framework for C - Version 2.1-3 00:04:58.647 http://cunit.sourceforge.net/ 00:04:58.647 00:04:58.647 00:04:58.647 Suite: memory 00:04:58.647 Test: alloc and free memory map ...[2024-10-28 17:55:15.031244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 
283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:58.647 passed 00:04:58.647 Test: mem map translation ...[2024-10-28 17:55:15.091919] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:58.647 [2024-10-28 17:55:15.092036] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:58.647 [2024-10-28 17:55:15.092138] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:58.647 [2024-10-28 17:55:15.092177] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:58.906 passed 00:04:58.906 Test: mem map registration ...[2024-10-28 17:55:15.190856] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:58.906 [2024-10-28 17:55:15.190972] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:58.906 passed 00:04:58.906 Test: mem map adjacent registrations ...passed 00:04:58.906 00:04:58.906 Run Summary: Type Total Ran Passed Failed Inactive 00:04:58.906 suites 1 1 n/a 0 0 00:04:58.906 tests 4 4 4 0 0 00:04:58.906 asserts 152 152 152 0 n/a 00:04:58.906 00:04:58.906 Elapsed time = 0.339 seconds 00:04:58.906 00:04:58.906 real 0m0.379s 00:04:58.906 user 0m0.349s 00:04:58.906 sys 0m0.023s 00:04:58.906 17:55:15 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:04:58.906 17:55:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:58.906 ************************************ 00:04:58.906 END TEST env_memory 00:04:58.906 ************************************ 00:04:58.906 17:55:15 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:58.906 17:55:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:04:58.906 17:55:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:04:58.906 17:55:15 env -- common/autotest_common.sh@10 -- # set +x 00:04:59.167 ************************************ 00:04:59.167 START TEST env_vtophys 00:04:59.167 ************************************ 00:04:59.167 17:55:15 env.env_vtophys -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:59.167 EAL: lib.eal log level changed from notice to debug 00:04:59.167 EAL: Detected lcore 0 as core 0 on socket 0 00:04:59.167 EAL: Detected lcore 1 as core 0 on socket 0 00:04:59.167 EAL: Detected lcore 2 as core 0 on socket 0 00:04:59.167 EAL: Detected lcore 3 as core 0 on socket 0 00:04:59.167 EAL: Detected lcore 4 as core 0 on socket 0 00:04:59.167 EAL: Detected lcore 5 as core 0 on socket 0 00:04:59.167 EAL: Detected lcore 6 as core 0 on socket 0 00:04:59.167 EAL: Detected lcore 7 as core 0 on socket 0 00:04:59.167 EAL: Detected lcore 8 as core 0 on socket 0 00:04:59.167 EAL: Detected lcore 9 as core 0 on socket 0 00:04:59.167 EAL: Maximum logical cores by configuration: 128 00:04:59.167 EAL: Detected CPU lcores: 10 00:04:59.167 EAL: Detected NUMA nodes: 1 00:04:59.167 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:59.167 EAL: Detected shared linkage of DPDK 00:04:59.167 EAL: No 
shared files mode enabled, IPC will be disabled 00:04:59.167 EAL: Selected IOVA mode 'PA' 00:04:59.167 EAL: Probing VFIO support... 00:04:59.167 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.167 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:59.167 EAL: Ask a virtual area of 0x2e000 bytes 00:04:59.167 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:59.167 EAL: Setting up physically contiguous memory... 00:04:59.167 EAL: Setting maximum number of open files to 524288 00:04:59.167 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:59.167 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:59.167 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.167 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:59.167 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.167 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.167 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:59.167 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:59.167 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.167 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:59.167 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.167 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.167 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:59.168 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:59.168 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.168 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:59.168 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.168 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.168 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:59.168 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:59.168 EAL: Ask a virtual area of 0x61000 bytes 00:04:59.168 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:59.168 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:59.168 EAL: Ask a virtual area of 0x400000000 bytes 00:04:59.168 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:59.168 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:59.168 EAL: Hugepages will be freed exactly as allocated. 00:04:59.168 EAL: No shared files mode enabled, IPC is disabled 00:04:59.168 EAL: No shared files mode enabled, IPC is disabled 00:04:59.168 EAL: TSC frequency is ~2200000 KHz 00:04:59.168 EAL: Main lcore 0 is ready (tid=7fdb86ea8a40;cpuset=[0]) 00:04:59.168 EAL: Trying to obtain current memory policy. 00:04:59.168 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.168 EAL: Restoring previous memory policy: 0 00:04:59.168 EAL: request: mp_malloc_sync 00:04:59.168 EAL: No shared files mode enabled, IPC is disabled 00:04:59.168 EAL: Heap on socket 0 was expanded by 2MB 00:04:59.168 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:59.168 EAL: No PCI address specified using 'addr=<id>' in: bus=pci 00:04:59.168 EAL: Mem event callback 'spdk:(nil)' registered 00:04:59.168 EAL: Module /sys/module/vfio_pci not found!
error 2 (No such file or directory) 00:04:59.168 00:04:59.168 00:04:59.168 CUnit - A unit testing framework for C - Version 2.1-3 00:04:59.168 http://cunit.sourceforge.net/ 00:04:59.168 00:04:59.168 00:04:59.168 Suite: components_suite 00:04:59.735 Test: vtophys_malloc_test ...passed 00:04:59.735 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:59.735 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.735 EAL: Restoring previous memory policy: 4 00:04:59.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.735 EAL: request: mp_malloc_sync 00:04:59.735 EAL: No shared files mode enabled, IPC is disabled 00:04:59.735 EAL: Heap on socket 0 was expanded by 4MB 00:04:59.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.735 EAL: request: mp_malloc_sync 00:04:59.735 EAL: No shared files mode enabled, IPC is disabled 00:04:59.735 EAL: Heap on socket 0 was shrunk by 4MB 00:04:59.735 EAL: Trying to obtain current memory policy. 00:04:59.735 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.735 EAL: Restoring previous memory policy: 4 00:04:59.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.735 EAL: request: mp_malloc_sync 00:04:59.735 EAL: No shared files mode enabled, IPC is disabled 00:04:59.735 EAL: Heap on socket 0 was expanded by 6MB 00:04:59.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.735 EAL: request: mp_malloc_sync 00:04:59.735 EAL: No shared files mode enabled, IPC is disabled 00:04:59.735 EAL: Heap on socket 0 was shrunk by 6MB 00:04:59.735 EAL: Trying to obtain current memory policy. 00:04:59.735 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.735 EAL: Restoring previous memory policy: 4 00:04:59.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.735 EAL: request: mp_malloc_sync 00:04:59.735 EAL: No shared files mode enabled, IPC is disabled 00:04:59.735 EAL: Heap on socket 0 was expanded by 10MB 00:04:59.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.735 EAL: request: mp_malloc_sync 00:04:59.735 EAL: No shared files mode enabled, IPC is disabled 00:04:59.735 EAL: Heap on socket 0 was shrunk by 10MB 00:04:59.736 EAL: Trying to obtain current memory policy. 00:04:59.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.736 EAL: Restoring previous memory policy: 4 00:04:59.736 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.736 EAL: request: mp_malloc_sync 00:04:59.736 EAL: No shared files mode enabled, IPC is disabled 00:04:59.736 EAL: Heap on socket 0 was expanded by 18MB 00:04:59.736 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.736 EAL: request: mp_malloc_sync 00:04:59.736 EAL: No shared files mode enabled, IPC is disabled 00:04:59.736 EAL: Heap on socket 0 was shrunk by 18MB 00:04:59.736 EAL: Trying to obtain current memory policy. 00:04:59.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.736 EAL: Restoring previous memory policy: 4 00:04:59.736 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.736 EAL: request: mp_malloc_sync 00:04:59.736 EAL: No shared files mode enabled, IPC is disabled 00:04:59.736 EAL: Heap on socket 0 was expanded by 34MB 00:04:59.736 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.736 EAL: request: mp_malloc_sync 00:04:59.736 EAL: No shared files mode enabled, IPC is disabled 00:04:59.736 EAL: Heap on socket 0 was shrunk by 34MB 00:04:59.736 EAL: Trying to obtain current memory policy. 
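The expand/shrink pairs above come from vtophys_spdk_malloc_test allocating one buffer of each size from the SPDK/DPDK heap and freeing it again: an allocation that outgrows the heap fires the registered 'spdk:(nil)' mem event callback to expand it, and the matching free shrinks it back. A minimal C sketch of that allocate/free pattern, assuming the public spdk_dma_malloc()/spdk_dma_free() API; the sizes and function name are illustrative, not the exact ones the test uses:

    #include <stddef.h>
    #include <stdio.h>
    #include "spdk/env.h"

    /* Sketch only: allocate DMA-safe buffers of growing size and free them,
     * which is what drives the "Heap on socket 0 was expanded/shrunk by N MB"
     * mem event callbacks in the trace above. Sizes are illustrative. */
    static void exercise_heap(void)
    {
        for (size_t size = 4ull << 20; size <= 1024ull << 20; size *= 2) {
            void *buf = spdk_dma_malloc(size, 0x200000 /* 2 MiB align */, NULL);
            if (buf == NULL) {
                fprintf(stderr, "allocation of %zu bytes failed\n", size);
                return;
            }
            spdk_dma_free(buf); /* heap can shrink once nothing holds the pages */
        }
    }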
00:04:59.736 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.994 EAL: Restoring previous memory policy: 4 00:04:59.994 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.994 EAL: request: mp_malloc_sync 00:04:59.994 EAL: No shared files mode enabled, IPC is disabled 00:04:59.994 EAL: Heap on socket 0 was expanded by 66MB 00:04:59.994 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.994 EAL: request: mp_malloc_sync 00:04:59.994 EAL: No shared files mode enabled, IPC is disabled 00:04:59.994 EAL: Heap on socket 0 was shrunk by 66MB 00:04:59.994 EAL: Trying to obtain current memory policy. 00:04:59.994 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:59.994 EAL: Restoring previous memory policy: 4 00:04:59.994 EAL: Calling mem event callback 'spdk:(nil)' 00:04:59.994 EAL: request: mp_malloc_sync 00:04:59.994 EAL: No shared files mode enabled, IPC is disabled 00:04:59.994 EAL: Heap on socket 0 was expanded by 130MB 00:05:00.253 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.253 EAL: request: mp_malloc_sync 00:05:00.253 EAL: No shared files mode enabled, IPC is disabled 00:05:00.253 EAL: Heap on socket 0 was shrunk by 130MB 00:05:00.511 EAL: Trying to obtain current memory policy. 00:05:00.511 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:00.511 EAL: Restoring previous memory policy: 4 00:05:00.511 EAL: Calling mem event callback 'spdk:(nil)' 00:05:00.511 EAL: request: mp_malloc_sync 00:05:00.511 EAL: No shared files mode enabled, IPC is disabled 00:05:00.511 EAL: Heap on socket 0 was expanded by 258MB 00:05:01.078 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.078 EAL: request: mp_malloc_sync 00:05:01.078 EAL: No shared files mode enabled, IPC is disabled 00:05:01.078 EAL: Heap on socket 0 was shrunk by 258MB 00:05:01.336 EAL: Trying to obtain current memory policy. 00:05:01.336 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:01.336 EAL: Restoring previous memory policy: 4 00:05:01.336 EAL: Calling mem event callback 'spdk:(nil)' 00:05:01.337 EAL: request: mp_malloc_sync 00:05:01.337 EAL: No shared files mode enabled, IPC is disabled 00:05:01.337 EAL: Heap on socket 0 was expanded by 514MB 00:05:02.286 EAL: Calling mem event callback 'spdk:(nil)' 00:05:02.286 EAL: request: mp_malloc_sync 00:05:02.286 EAL: No shared files mode enabled, IPC is disabled 00:05:02.286 EAL: Heap on socket 0 was shrunk by 514MB 00:05:02.854 EAL: Trying to obtain current memory policy. 
00:05:02.854 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:03.113 EAL: Restoring previous memory policy: 4 00:05:03.113 EAL: Calling mem event callback 'spdk:(nil)' 00:05:03.113 EAL: request: mp_malloc_sync 00:05:03.113 EAL: No shared files mode enabled, IPC is disabled 00:05:03.113 EAL: Heap on socket 0 was expanded by 1026MB 00:05:05.014 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.014 EAL: request: mp_malloc_sync 00:05:05.014 EAL: No shared files mode enabled, IPC is disabled 00:05:05.014 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:06.406 passed 00:05:06.406 00:05:06.406 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.406 suites 1 1 n/a 0 0 00:05:06.406 tests 2 2 2 0 0 00:05:06.406 asserts 5712 5712 5712 0 n/a 00:05:06.406 00:05:06.406 Elapsed time = 6.912 seconds 00:05:06.406 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.406 EAL: request: mp_malloc_sync 00:05:06.406 EAL: No shared files mode enabled, IPC is disabled 00:05:06.406 EAL: Heap on socket 0 was shrunk by 2MB 00:05:06.406 EAL: No shared files mode enabled, IPC is disabled 00:05:06.406 EAL: No shared files mode enabled, IPC is disabled 00:05:06.406 EAL: No shared files mode enabled, IPC is disabled 00:05:06.406 00:05:06.406 real 0m7.244s 00:05:06.406 user 0m6.389s 00:05:06.406 sys 0m0.696s 00:05:06.406 17:55:22 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.406 ************************************ 00:05:06.406 END TEST env_vtophys 00:05:06.406 17:55:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:06.406 ************************************ 00:05:06.406 17:55:22 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:06.406 17:55:22 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.406 17:55:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.406 17:55:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.406 ************************************ 00:05:06.406 START TEST env_pci 00:05:06.406 ************************************ 00:05:06.406 17:55:22 env.env_pci -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:06.406 00:05:06.406 00:05:06.406 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.406 http://cunit.sourceforge.net/ 00:05:06.406 00:05:06.406 00:05:06.406 Suite: pci 00:05:06.406 Test: pci_hook ...[2024-10-28 17:55:22.722703] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1111:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57848 has claimed it 00:05:06.406 EAL: Cannot find device (10000:00:01.0) 00:05:06.406 EAL: Failed to attach device on primary process 00:05:06.406 passed 00:05:06.406 00:05:06.406 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.406 suites 1 1 n/a 0 0 00:05:06.406 tests 1 1 1 0 0 00:05:06.406 asserts 25 25 25 0 n/a 00:05:06.406 00:05:06.406 Elapsed time = 0.010 seconds 00:05:06.406 00:05:06.406 real 0m0.091s 00:05:06.406 user 0m0.043s 00:05:06.406 sys 0m0.048s 00:05:06.406 17:55:22 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.406 17:55:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:06.406 ************************************ 00:05:06.406 END TEST env_pci 00:05:06.406 ************************************ 00:05:06.406 17:55:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:06.406 17:55:22 env -- env/env.sh@15 -- # uname 00:05:06.406 17:55:22 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:06.406 17:55:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:06.406 17:55:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.406 17:55:22 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:05:06.406 17:55:22 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.406 17:55:22 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.406 ************************************ 00:05:06.406 START TEST env_dpdk_post_init 00:05:06.406 ************************************ 00:05:06.406 17:55:22 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:06.406 EAL: Detected CPU lcores: 10 00:05:06.406 EAL: Detected NUMA nodes: 1 00:05:06.406 EAL: Detected shared linkage of DPDK 00:05:06.664 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.664 EAL: Selected IOVA mode 'PA' 00:05:06.664 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.664 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:06.664 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:06.664 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:05:06.664 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:05:06.664 Starting DPDK initialization... 00:05:06.664 Starting SPDK post initialization... 00:05:06.664 SPDK NVMe probe 00:05:06.664 Attaching to 0000:00:10.0 00:05:06.664 Attaching to 0000:00:11.0 00:05:06.664 Attaching to 0000:00:12.0 00:05:06.664 Attaching to 0000:00:13.0 00:05:06.664 Attached to 0000:00:10.0 00:05:06.664 Attached to 0000:00:11.0 00:05:06.664 Attached to 0000:00:13.0 00:05:06.664 Attached to 0000:00:12.0 00:05:06.664 Cleaning up... 
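The Attaching/Attached pairs above are SPDK's standard probe flow running against the four emulated QEMU NVMe controllers (1b36:0010). A minimal sketch of that flow, assuming the public spdk_nvme_probe() API; the callback names and print statements are hypothetical:

    #include <stdbool.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Hypothetical probe callback: accept every controller the bus scan reports. */
    static bool probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                         struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attaching to %s\n", trid->traddr);
        return true; /* true = attach to this controller */
    }

    static void attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
                          struct spdk_nvme_ctrlr *ctrlr,
                          const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
    }

    /* Enumerate local PCIe NVMe controllers; a NULL transport ID means scan
     * the whole PCIe bus, which yields the four 0000:00:1x.0 devices above. */
    static int scan_nvme(void)
    {
        return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
    }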
00:05:06.664 00:05:06.664 real 0m0.295s 00:05:06.664 user 0m0.105s 00:05:06.664 sys 0m0.090s 00:05:06.664 17:55:23 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:06.664 17:55:23 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:06.664 ************************************ 00:05:06.664 END TEST env_dpdk_post_init 00:05:06.664 ************************************ 00:05:06.922 17:55:23 env -- env/env.sh@26 -- # uname 00:05:06.922 17:55:23 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:06.922 17:55:23 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:06.922 17:55:23 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:06.922 17:55:23 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:06.922 17:55:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.922 ************************************ 00:05:06.922 START TEST env_mem_callbacks 00:05:06.922 ************************************ 00:05:06.922 17:55:23 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:06.922 EAL: Detected CPU lcores: 10 00:05:06.922 EAL: Detected NUMA nodes: 1 00:05:06.922 EAL: Detected shared linkage of DPDK 00:05:06.922 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:06.922 EAL: Selected IOVA mode 'PA' 00:05:06.922 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:06.922 00:05:06.922 00:05:06.922 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.922 http://cunit.sourceforge.net/ 00:05:06.922 00:05:06.922 00:05:06.922 Suite: memory 00:05:06.922 Test: test ... 00:05:06.922 register 0x200000200000 2097152 00:05:06.922 malloc 3145728 00:05:06.922 register 0x200000400000 4194304 00:05:06.922 buf 0x2000004fffc0 len 3145728 PASSED 00:05:06.922 malloc 64 00:05:06.922 buf 0x2000004ffec0 len 64 PASSED 00:05:06.922 malloc 4194304 00:05:06.922 register 0x200000800000 6291456 00:05:06.922 buf 0x2000009fffc0 len 4194304 PASSED 00:05:06.922 free 0x2000004fffc0 3145728 00:05:06.922 free 0x2000004ffec0 64 00:05:06.922 unregister 0x200000400000 4194304 PASSED 00:05:06.922 free 0x2000009fffc0 4194304 00:05:06.922 unregister 0x200000800000 6291456 PASSED 00:05:06.922 malloc 8388608 00:05:06.922 register 0x200000400000 10485760 00:05:06.922 buf 0x2000005fffc0 len 8388608 PASSED 00:05:06.922 free 0x2000005fffc0 8388608 00:05:07.181 unregister 0x200000400000 10485760 PASSED 00:05:07.181 passed 00:05:07.181 00:05:07.181 Run Summary: Type Total Ran Passed Failed Inactive 00:05:07.181 suites 1 1 n/a 0 0 00:05:07.181 tests 1 1 1 0 0 00:05:07.181 asserts 15 15 15 0 n/a 00:05:07.181 00:05:07.181 Elapsed time = 0.058 seconds 00:05:07.181 00:05:07.181 real 0m0.259s 00:05:07.181 user 0m0.096s 00:05:07.181 sys 0m0.061s 00:05:07.181 17:55:23 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.181 17:55:23 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:07.181 ************************************ 00:05:07.181 END TEST env_mem_callbacks 00:05:07.181 ************************************ 00:05:07.181 00:05:07.181 real 0m8.689s 00:05:07.181 user 0m7.167s 00:05:07.181 sys 0m1.145s 00:05:07.181 17:55:23 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:07.181 17:55:23 env -- common/autotest_common.sh@10 -- # set +x 00:05:07.181 ************************************ 00:05:07.181 END TEST env 00:05:07.181 
************************************ 00:05:07.181 17:55:23 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:07.181 17:55:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:07.181 17:55:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:07.181 17:55:23 -- common/autotest_common.sh@10 -- # set +x 00:05:07.181 ************************************ 00:05:07.181 START TEST rpc 00:05:07.181 ************************************ 00:05:07.181 17:55:23 rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:07.181 * Looking for test storage... 00:05:07.181 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:07.181 17:55:23 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:07.181 17:55:23 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:07.181 17:55:23 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:07.441 17:55:23 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:07.441 17:55:23 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:07.441 17:55:23 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:07.441 17:55:23 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:07.441 17:55:23 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:07.441 17:55:23 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:07.441 17:55:23 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:07.441 17:55:23 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:07.441 17:55:23 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:07.441 17:55:23 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:07.441 17:55:23 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:07.441 17:55:23 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:07.441 17:55:23 rpc -- scripts/common.sh@345 -- # : 1 00:05:07.441 17:55:23 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:07.441 17:55:23 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:07.441 17:55:23 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:07.441 17:55:23 rpc -- scripts/common.sh@353 -- # local d=1 00:05:07.441 17:55:23 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:07.441 17:55:23 rpc -- scripts/common.sh@355 -- # echo 1 00:05:07.441 17:55:23 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:07.441 17:55:23 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:07.441 17:55:23 rpc -- scripts/common.sh@353 -- # local d=2 00:05:07.441 17:55:23 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:07.441 17:55:23 rpc -- scripts/common.sh@355 -- # echo 2 00:05:07.441 17:55:23 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:07.441 17:55:23 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:07.441 17:55:23 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:07.441 17:55:23 rpc -- scripts/common.sh@368 -- # return 0 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:07.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.441 --rc genhtml_branch_coverage=1 00:05:07.441 --rc genhtml_function_coverage=1 00:05:07.441 --rc genhtml_legend=1 00:05:07.441 --rc geninfo_all_blocks=1 00:05:07.441 --rc geninfo_unexecuted_blocks=1 00:05:07.441 00:05:07.441 ' 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:07.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.441 --rc genhtml_branch_coverage=1 00:05:07.441 --rc genhtml_function_coverage=1 00:05:07.441 --rc genhtml_legend=1 00:05:07.441 --rc geninfo_all_blocks=1 00:05:07.441 --rc geninfo_unexecuted_blocks=1 00:05:07.441 00:05:07.441 ' 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:07.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.441 --rc genhtml_branch_coverage=1 00:05:07.441 --rc genhtml_function_coverage=1 00:05:07.441 --rc genhtml_legend=1 00:05:07.441 --rc geninfo_all_blocks=1 00:05:07.441 --rc geninfo_unexecuted_blocks=1 00:05:07.441 00:05:07.441 ' 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:07.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:07.441 --rc genhtml_branch_coverage=1 00:05:07.441 --rc genhtml_function_coverage=1 00:05:07.441 --rc genhtml_legend=1 00:05:07.441 --rc geninfo_all_blocks=1 00:05:07.441 --rc geninfo_unexecuted_blocks=1 00:05:07.441 00:05:07.441 ' 00:05:07.441 17:55:23 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57975 00:05:07.441 17:55:23 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:07.441 17:55:23 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:07.441 17:55:23 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57975 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@833 -- # '[' -z 57975 ']' 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:07.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:07.441 17:55:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:07.441 [2024-10-28 17:55:23.834452] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:05:07.441 [2024-10-28 17:55:23.834646] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57975 ] 00:05:07.699 [2024-10-28 17:55:24.024426] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.699 [2024-10-28 17:55:24.148140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:07.699 [2024-10-28 17:55:24.148223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57975' to capture a snapshot of events at runtime. 00:05:07.699 [2024-10-28 17:55:24.148244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:07.699 [2024-10-28 17:55:24.148261] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:07.699 [2024-10-28 17:55:24.148274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57975 for offline analysis/debug. 00:05:07.699 [2024-10-28 17:55:24.149681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.632 17:55:24 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:08.633 17:55:24 rpc -- common/autotest_common.sh@866 -- # return 0 00:05:08.633 17:55:24 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.633 17:55:24 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:08.633 17:55:24 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:08.633 17:55:24 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:08.633 17:55:24 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:08.633 17:55:24 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:08.633 17:55:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.633 ************************************ 00:05:08.633 START TEST rpc_integrity 00:05:08.633 ************************************ 00:05:08.633 17:55:24 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:08.633 17:55:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:08.633 17:55:24 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.633 17:55:24 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.633 17:55:24 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.633 17:55:24 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:08.633 17:55:24 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:08.633 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:08.633 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:08.633 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.633 17:55:25 
rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.633 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.633 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:08.633 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:08.633 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.633 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.633 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.633 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:08.633 { 00:05:08.633 "name": "Malloc0", 00:05:08.633 "aliases": [ 00:05:08.633 "a20a82cd-1b0a-4f09-a8ea-2adbba8f5436" 00:05:08.633 ], 00:05:08.633 "product_name": "Malloc disk", 00:05:08.633 "block_size": 512, 00:05:08.633 "num_blocks": 16384, 00:05:08.633 "uuid": "a20a82cd-1b0a-4f09-a8ea-2adbba8f5436", 00:05:08.633 "assigned_rate_limits": { 00:05:08.633 "rw_ios_per_sec": 0, 00:05:08.633 "rw_mbytes_per_sec": 0, 00:05:08.633 "r_mbytes_per_sec": 0, 00:05:08.633 "w_mbytes_per_sec": 0 00:05:08.633 }, 00:05:08.633 "claimed": false, 00:05:08.633 "zoned": false, 00:05:08.633 "supported_io_types": { 00:05:08.633 "read": true, 00:05:08.633 "write": true, 00:05:08.633 "unmap": true, 00:05:08.633 "flush": true, 00:05:08.633 "reset": true, 00:05:08.633 "nvme_admin": false, 00:05:08.633 "nvme_io": false, 00:05:08.633 "nvme_io_md": false, 00:05:08.633 "write_zeroes": true, 00:05:08.633 "zcopy": true, 00:05:08.633 "get_zone_info": false, 00:05:08.633 "zone_management": false, 00:05:08.633 "zone_append": false, 00:05:08.633 "compare": false, 00:05:08.633 "compare_and_write": false, 00:05:08.633 "abort": true, 00:05:08.633 "seek_hole": false, 00:05:08.633 "seek_data": false, 00:05:08.633 "copy": true, 00:05:08.633 "nvme_iov_md": false 00:05:08.633 }, 00:05:08.633 "memory_domains": [ 00:05:08.633 { 00:05:08.633 "dma_device_id": "system", 00:05:08.633 "dma_device_type": 1 00:05:08.633 }, 00:05:08.633 { 00:05:08.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.633 "dma_device_type": 2 00:05:08.633 } 00:05:08.633 ], 00:05:08.633 "driver_specific": {} 00:05:08.633 } 00:05:08.633 ]' 00:05:08.633 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:08.891 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:08.891 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:08.891 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.891 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.891 [2024-10-28 17:55:25.141558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:08.891 [2024-10-28 17:55:25.141657] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:08.891 [2024-10-28 17:55:25.141699] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:05:08.891 [2024-10-28 17:55:25.141722] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:08.891 [2024-10-28 17:55:25.144715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:08.891 [2024-10-28 17:55:25.144775] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:08.891 Passthru0 00:05:08.891 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.891 
17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:08.891 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.891 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.891 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.891 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:08.891 { 00:05:08.891 "name": "Malloc0", 00:05:08.891 "aliases": [ 00:05:08.891 "a20a82cd-1b0a-4f09-a8ea-2adbba8f5436" 00:05:08.891 ], 00:05:08.891 "product_name": "Malloc disk", 00:05:08.891 "block_size": 512, 00:05:08.891 "num_blocks": 16384, 00:05:08.891 "uuid": "a20a82cd-1b0a-4f09-a8ea-2adbba8f5436", 00:05:08.891 "assigned_rate_limits": { 00:05:08.891 "rw_ios_per_sec": 0, 00:05:08.891 "rw_mbytes_per_sec": 0, 00:05:08.891 "r_mbytes_per_sec": 0, 00:05:08.891 "w_mbytes_per_sec": 0 00:05:08.891 }, 00:05:08.891 "claimed": true, 00:05:08.891 "claim_type": "exclusive_write", 00:05:08.891 "zoned": false, 00:05:08.891 "supported_io_types": { 00:05:08.891 "read": true, 00:05:08.891 "write": true, 00:05:08.891 "unmap": true, 00:05:08.891 "flush": true, 00:05:08.891 "reset": true, 00:05:08.891 "nvme_admin": false, 00:05:08.891 "nvme_io": false, 00:05:08.891 "nvme_io_md": false, 00:05:08.891 "write_zeroes": true, 00:05:08.891 "zcopy": true, 00:05:08.891 "get_zone_info": false, 00:05:08.891 "zone_management": false, 00:05:08.891 "zone_append": false, 00:05:08.891 "compare": false, 00:05:08.891 "compare_and_write": false, 00:05:08.891 "abort": true, 00:05:08.891 "seek_hole": false, 00:05:08.891 "seek_data": false, 00:05:08.891 "copy": true, 00:05:08.891 "nvme_iov_md": false 00:05:08.891 }, 00:05:08.891 "memory_domains": [ 00:05:08.891 { 00:05:08.891 "dma_device_id": "system", 00:05:08.891 "dma_device_type": 1 00:05:08.891 }, 00:05:08.891 { 00:05:08.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.891 "dma_device_type": 2 00:05:08.891 } 00:05:08.891 ], 00:05:08.891 "driver_specific": {} 00:05:08.891 }, 00:05:08.891 { 00:05:08.891 "name": "Passthru0", 00:05:08.891 "aliases": [ 00:05:08.891 "77ddef41-0e6e-5bbf-9ebd-aa4a8e6fcd37" 00:05:08.891 ], 00:05:08.891 "product_name": "passthru", 00:05:08.891 "block_size": 512, 00:05:08.891 "num_blocks": 16384, 00:05:08.891 "uuid": "77ddef41-0e6e-5bbf-9ebd-aa4a8e6fcd37", 00:05:08.891 "assigned_rate_limits": { 00:05:08.891 "rw_ios_per_sec": 0, 00:05:08.891 "rw_mbytes_per_sec": 0, 00:05:08.891 "r_mbytes_per_sec": 0, 00:05:08.891 "w_mbytes_per_sec": 0 00:05:08.891 }, 00:05:08.891 "claimed": false, 00:05:08.891 "zoned": false, 00:05:08.891 "supported_io_types": { 00:05:08.891 "read": true, 00:05:08.891 "write": true, 00:05:08.891 "unmap": true, 00:05:08.891 "flush": true, 00:05:08.891 "reset": true, 00:05:08.891 "nvme_admin": false, 00:05:08.891 "nvme_io": false, 00:05:08.891 "nvme_io_md": false, 00:05:08.891 "write_zeroes": true, 00:05:08.891 "zcopy": true, 00:05:08.891 "get_zone_info": false, 00:05:08.891 "zone_management": false, 00:05:08.891 "zone_append": false, 00:05:08.891 "compare": false, 00:05:08.891 "compare_and_write": false, 00:05:08.891 "abort": true, 00:05:08.891 "seek_hole": false, 00:05:08.891 "seek_data": false, 00:05:08.891 "copy": true, 00:05:08.891 "nvme_iov_md": false 00:05:08.891 }, 00:05:08.891 "memory_domains": [ 00:05:08.891 { 00:05:08.891 "dma_device_id": "system", 00:05:08.891 "dma_device_type": 1 00:05:08.891 }, 00:05:08.891 { 00:05:08.891 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:08.891 "dma_device_type": 2 
00:05:08.891 } 00:05:08.892 ], 00:05:08.892 "driver_specific": { 00:05:08.892 "passthru": { 00:05:08.892 "name": "Passthru0", 00:05:08.892 "base_bdev_name": "Malloc0" 00:05:08.892 } 00:05:08.892 } 00:05:08.892 } 00:05:08.892 ]' 00:05:08.892 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:08.892 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:08.892 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.892 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.892 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:08.892 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:08.892 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:08.892 17:55:25 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:08.892 00:05:08.892 real 0m0.370s 00:05:08.892 user 0m0.236s 00:05:08.892 sys 0m0.039s 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:08.892 17:55:25 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:08.892 ************************************ 00:05:08.892 END TEST rpc_integrity 00:05:08.892 ************************************ 00:05:09.151 17:55:25 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:09.151 17:55:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.151 17:55:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.151 17:55:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 ************************************ 00:05:09.151 START TEST rpc_plugins 00:05:09.151 ************************************ 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:09.151 { 00:05:09.151 "name": "Malloc1", 00:05:09.151 "aliases": 
[ 00:05:09.151 "7c20f224-2f00-4819-a6e2-e569e56299e0" 00:05:09.151 ], 00:05:09.151 "product_name": "Malloc disk", 00:05:09.151 "block_size": 4096, 00:05:09.151 "num_blocks": 256, 00:05:09.151 "uuid": "7c20f224-2f00-4819-a6e2-e569e56299e0", 00:05:09.151 "assigned_rate_limits": { 00:05:09.151 "rw_ios_per_sec": 0, 00:05:09.151 "rw_mbytes_per_sec": 0, 00:05:09.151 "r_mbytes_per_sec": 0, 00:05:09.151 "w_mbytes_per_sec": 0 00:05:09.151 }, 00:05:09.151 "claimed": false, 00:05:09.151 "zoned": false, 00:05:09.151 "supported_io_types": { 00:05:09.151 "read": true, 00:05:09.151 "write": true, 00:05:09.151 "unmap": true, 00:05:09.151 "flush": true, 00:05:09.151 "reset": true, 00:05:09.151 "nvme_admin": false, 00:05:09.151 "nvme_io": false, 00:05:09.151 "nvme_io_md": false, 00:05:09.151 "write_zeroes": true, 00:05:09.151 "zcopy": true, 00:05:09.151 "get_zone_info": false, 00:05:09.151 "zone_management": false, 00:05:09.151 "zone_append": false, 00:05:09.151 "compare": false, 00:05:09.151 "compare_and_write": false, 00:05:09.151 "abort": true, 00:05:09.151 "seek_hole": false, 00:05:09.151 "seek_data": false, 00:05:09.151 "copy": true, 00:05:09.151 "nvme_iov_md": false 00:05:09.151 }, 00:05:09.151 "memory_domains": [ 00:05:09.151 { 00:05:09.151 "dma_device_id": "system", 00:05:09.151 "dma_device_type": 1 00:05:09.151 }, 00:05:09.151 { 00:05:09.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.151 "dma_device_type": 2 00:05:09.151 } 00:05:09.151 ], 00:05:09.151 "driver_specific": {} 00:05:09.151 } 00:05:09.151 ]' 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:09.151 17:55:25 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:09.151 00:05:09.151 real 0m0.164s 00:05:09.151 user 0m0.108s 00:05:09.151 sys 0m0.019s 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.151 17:55:25 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 ************************************ 00:05:09.151 END TEST rpc_plugins 00:05:09.151 ************************************ 00:05:09.151 17:55:25 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:09.151 17:55:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.151 17:55:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.151 17:55:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 ************************************ 00:05:09.151 START TEST rpc_trace_cmd_test 00:05:09.151 ************************************ 00:05:09.151 17:55:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 
-- # rpc_trace_cmd_test 00:05:09.151 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:09.151 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:09.151 17:55:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.151 17:55:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:09.410 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57975", 00:05:09.410 "tpoint_group_mask": "0x8", 00:05:09.410 "iscsi_conn": { 00:05:09.410 "mask": "0x2", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "scsi": { 00:05:09.410 "mask": "0x4", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "bdev": { 00:05:09.410 "mask": "0x8", 00:05:09.410 "tpoint_mask": "0xffffffffffffffff" 00:05:09.410 }, 00:05:09.410 "nvmf_rdma": { 00:05:09.410 "mask": "0x10", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "nvmf_tcp": { 00:05:09.410 "mask": "0x20", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "ftl": { 00:05:09.410 "mask": "0x40", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "blobfs": { 00:05:09.410 "mask": "0x80", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "dsa": { 00:05:09.410 "mask": "0x200", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "thread": { 00:05:09.410 "mask": "0x400", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "nvme_pcie": { 00:05:09.410 "mask": "0x800", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "iaa": { 00:05:09.410 "mask": "0x1000", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "nvme_tcp": { 00:05:09.410 "mask": "0x2000", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "bdev_nvme": { 00:05:09.410 "mask": "0x4000", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "sock": { 00:05:09.410 "mask": "0x8000", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "blob": { 00:05:09.410 "mask": "0x10000", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "bdev_raid": { 00:05:09.410 "mask": "0x20000", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 }, 00:05:09.410 "scheduler": { 00:05:09.410 "mask": "0x40000", 00:05:09.410 "tpoint_mask": "0x0" 00:05:09.410 } 00:05:09.410 }' 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:09.410 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:09.670 17:55:25 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:09.670 00:05:09.670 real 0m0.287s 00:05:09.670 user 0m0.250s 00:05:09.670 sys 0m0.029s 00:05:09.670 17:55:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:05:09.670 17:55:25 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:09.670 ************************************ 00:05:09.670 END TEST rpc_trace_cmd_test 00:05:09.670 ************************************ 00:05:09.670 17:55:25 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:09.670 17:55:25 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:09.670 17:55:25 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:09.670 17:55:25 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:09.670 17:55:25 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:09.670 17:55:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.670 ************************************ 00:05:09.670 START TEST rpc_daemon_integrity 00:05:09.670 ************************************ 00:05:09.670 17:55:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:05:09.670 17:55:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.670 17:55:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.670 17:55:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.670 17:55:25 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.670 17:55:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.670 17:55:25 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.670 { 00:05:09.670 "name": "Malloc2", 00:05:09.670 "aliases": [ 00:05:09.670 "720559d6-9bac-4096-958a-9b2ecd870b83" 00:05:09.670 ], 00:05:09.670 "product_name": "Malloc disk", 00:05:09.670 "block_size": 512, 00:05:09.670 "num_blocks": 16384, 00:05:09.670 "uuid": "720559d6-9bac-4096-958a-9b2ecd870b83", 00:05:09.670 "assigned_rate_limits": { 00:05:09.670 "rw_ios_per_sec": 0, 00:05:09.670 "rw_mbytes_per_sec": 0, 00:05:09.670 "r_mbytes_per_sec": 0, 00:05:09.670 "w_mbytes_per_sec": 0 00:05:09.670 }, 00:05:09.670 "claimed": false, 00:05:09.670 "zoned": false, 00:05:09.670 "supported_io_types": { 00:05:09.670 "read": true, 00:05:09.670 "write": true, 00:05:09.670 "unmap": true, 00:05:09.670 "flush": true, 00:05:09.670 "reset": true, 00:05:09.670 "nvme_admin": false, 00:05:09.670 "nvme_io": false, 00:05:09.670 "nvme_io_md": false, 00:05:09.670 "write_zeroes": true, 00:05:09.670 "zcopy": true, 00:05:09.670 "get_zone_info": false, 00:05:09.670 "zone_management": false, 00:05:09.670 "zone_append": false, 00:05:09.670 "compare": false, 00:05:09.670 
"compare_and_write": false, 00:05:09.670 "abort": true, 00:05:09.670 "seek_hole": false, 00:05:09.670 "seek_data": false, 00:05:09.670 "copy": true, 00:05:09.670 "nvme_iov_md": false 00:05:09.670 }, 00:05:09.670 "memory_domains": [ 00:05:09.670 { 00:05:09.670 "dma_device_id": "system", 00:05:09.670 "dma_device_type": 1 00:05:09.670 }, 00:05:09.670 { 00:05:09.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.670 "dma_device_type": 2 00:05:09.670 } 00:05:09.670 ], 00:05:09.670 "driver_specific": {} 00:05:09.670 } 00:05:09.670 ]' 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.670 [2024-10-28 17:55:26.101012] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:09.670 [2024-10-28 17:55:26.101089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.670 [2024-10-28 17:55:26.101128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:05:09.670 [2024-10-28 17:55:26.101144] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.670 [2024-10-28 17:55:26.103916] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.670 [2024-10-28 17:55:26.103966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.670 Passthru0 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.670 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.670 { 00:05:09.670 "name": "Malloc2", 00:05:09.670 "aliases": [ 00:05:09.670 "720559d6-9bac-4096-958a-9b2ecd870b83" 00:05:09.670 ], 00:05:09.670 "product_name": "Malloc disk", 00:05:09.670 "block_size": 512, 00:05:09.670 "num_blocks": 16384, 00:05:09.670 "uuid": "720559d6-9bac-4096-958a-9b2ecd870b83", 00:05:09.670 "assigned_rate_limits": { 00:05:09.670 "rw_ios_per_sec": 0, 00:05:09.670 "rw_mbytes_per_sec": 0, 00:05:09.670 "r_mbytes_per_sec": 0, 00:05:09.670 "w_mbytes_per_sec": 0 00:05:09.670 }, 00:05:09.670 "claimed": true, 00:05:09.670 "claim_type": "exclusive_write", 00:05:09.670 "zoned": false, 00:05:09.670 "supported_io_types": { 00:05:09.670 "read": true, 00:05:09.670 "write": true, 00:05:09.670 "unmap": true, 00:05:09.670 "flush": true, 00:05:09.670 "reset": true, 00:05:09.670 "nvme_admin": false, 00:05:09.670 "nvme_io": false, 00:05:09.670 "nvme_io_md": false, 00:05:09.670 "write_zeroes": true, 00:05:09.670 "zcopy": true, 00:05:09.670 "get_zone_info": false, 00:05:09.670 "zone_management": false, 00:05:09.670 "zone_append": false, 00:05:09.670 "compare": false, 00:05:09.670 "compare_and_write": false, 00:05:09.670 "abort": true, 00:05:09.670 "seek_hole": false, 00:05:09.670 "seek_data": false, 
00:05:09.670 "copy": true, 00:05:09.670 "nvme_iov_md": false 00:05:09.670 }, 00:05:09.670 "memory_domains": [ 00:05:09.670 { 00:05:09.670 "dma_device_id": "system", 00:05:09.670 "dma_device_type": 1 00:05:09.670 }, 00:05:09.670 { 00:05:09.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.670 "dma_device_type": 2 00:05:09.670 } 00:05:09.670 ], 00:05:09.670 "driver_specific": {} 00:05:09.670 }, 00:05:09.670 { 00:05:09.670 "name": "Passthru0", 00:05:09.670 "aliases": [ 00:05:09.670 "2545aa0e-52ce-569e-afa5-00e7d1862dbb" 00:05:09.670 ], 00:05:09.670 "product_name": "passthru", 00:05:09.670 "block_size": 512, 00:05:09.670 "num_blocks": 16384, 00:05:09.670 "uuid": "2545aa0e-52ce-569e-afa5-00e7d1862dbb", 00:05:09.670 "assigned_rate_limits": { 00:05:09.670 "rw_ios_per_sec": 0, 00:05:09.670 "rw_mbytes_per_sec": 0, 00:05:09.670 "r_mbytes_per_sec": 0, 00:05:09.670 "w_mbytes_per_sec": 0 00:05:09.670 }, 00:05:09.670 "claimed": false, 00:05:09.670 "zoned": false, 00:05:09.670 "supported_io_types": { 00:05:09.670 "read": true, 00:05:09.670 "write": true, 00:05:09.670 "unmap": true, 00:05:09.671 "flush": true, 00:05:09.671 "reset": true, 00:05:09.671 "nvme_admin": false, 00:05:09.671 "nvme_io": false, 00:05:09.671 "nvme_io_md": false, 00:05:09.671 "write_zeroes": true, 00:05:09.671 "zcopy": true, 00:05:09.671 "get_zone_info": false, 00:05:09.671 "zone_management": false, 00:05:09.671 "zone_append": false, 00:05:09.671 "compare": false, 00:05:09.671 "compare_and_write": false, 00:05:09.671 "abort": true, 00:05:09.671 "seek_hole": false, 00:05:09.671 "seek_data": false, 00:05:09.671 "copy": true, 00:05:09.671 "nvme_iov_md": false 00:05:09.671 }, 00:05:09.671 "memory_domains": [ 00:05:09.671 { 00:05:09.671 "dma_device_id": "system", 00:05:09.671 "dma_device_type": 1 00:05:09.671 }, 00:05:09.671 { 00:05:09.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.671 "dma_device_type": 2 00:05:09.671 } 00:05:09.671 ], 00:05:09.671 "driver_specific": { 00:05:09.671 "passthru": { 00:05:09.671 "name": "Passthru0", 00:05:09.671 "base_bdev_name": "Malloc2" 00:05:09.671 } 00:05:09.671 } 00:05:09.671 } 00:05:09.671 ]' 00:05:09.671 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:09.929 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.929 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.929 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.929 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.929 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.929 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 
00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.930 00:05:09.930 real 0m0.343s 00:05:09.930 user 0m0.213s 00:05:09.930 sys 0m0.043s 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:09.930 17:55:26 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.930 ************************************ 00:05:09.930 END TEST rpc_daemon_integrity 00:05:09.930 ************************************ 00:05:09.930 17:55:26 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:09.930 17:55:26 rpc -- rpc/rpc.sh@84 -- # killprocess 57975 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@952 -- # '[' -z 57975 ']' 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@956 -- # kill -0 57975 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@957 -- # uname 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 57975 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:09.930 killing process with pid 57975 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 57975' 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@971 -- # kill 57975 00:05:09.930 17:55:26 rpc -- common/autotest_common.sh@976 -- # wait 57975 00:05:12.500 00:05:12.500 real 0m4.973s 00:05:12.500 user 0m5.816s 00:05:12.500 sys 0m0.746s 00:05:12.500 17:55:28 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:12.500 17:55:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.500 ************************************ 00:05:12.500 END TEST rpc 00:05:12.500 ************************************ 00:05:12.500 17:55:28 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:12.500 17:55:28 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:12.500 17:55:28 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.500 17:55:28 -- common/autotest_common.sh@10 -- # set +x 00:05:12.500 ************************************ 00:05:12.500 START TEST skip_rpc 00:05:12.500 ************************************ 00:05:12.500 17:55:28 skip_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:12.500 * Looking for test storage... 
00:05:12.500 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:12.500 17:55:28 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:12.500 17:55:28 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:12.500 17:55:28 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:12.500 17:55:28 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:12.500 17:55:28 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:12.501 17:55:28 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:12.501 17:55:28 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:12.501 17:55:28 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:12.501 17:55:28 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:12.501 17:55:28 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:12.501 17:55:28 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:12.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.501 --rc genhtml_branch_coverage=1 00:05:12.501 --rc genhtml_function_coverage=1 00:05:12.501 --rc genhtml_legend=1 00:05:12.501 --rc geninfo_all_blocks=1 00:05:12.501 --rc geninfo_unexecuted_blocks=1 00:05:12.501 00:05:12.501 ' 00:05:12.501 17:55:28 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:12.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.501 --rc genhtml_branch_coverage=1 00:05:12.501 --rc genhtml_function_coverage=1 00:05:12.501 --rc genhtml_legend=1 00:05:12.501 --rc geninfo_all_blocks=1 00:05:12.501 --rc geninfo_unexecuted_blocks=1 00:05:12.501 00:05:12.501 ' 00:05:12.501 17:55:28 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 
00:05:12.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.501 --rc genhtml_branch_coverage=1 00:05:12.501 --rc genhtml_function_coverage=1 00:05:12.501 --rc genhtml_legend=1 00:05:12.501 --rc geninfo_all_blocks=1 00:05:12.501 --rc geninfo_unexecuted_blocks=1 00:05:12.501 00:05:12.501 ' 00:05:12.501 17:55:28 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:12.501 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:12.501 --rc genhtml_branch_coverage=1 00:05:12.501 --rc genhtml_function_coverage=1 00:05:12.501 --rc genhtml_legend=1 00:05:12.501 --rc geninfo_all_blocks=1 00:05:12.501 --rc geninfo_unexecuted_blocks=1 00:05:12.501 00:05:12.501 ' 00:05:12.501 17:55:28 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:12.501 17:55:28 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:12.501 17:55:28 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:12.501 17:55:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:12.501 17:55:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:12.501 17:55:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:12.501 ************************************ 00:05:12.501 START TEST skip_rpc 00:05:12.501 ************************************ 00:05:12.501 17:55:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:05:12.501 17:55:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=58204 00:05:12.501 17:55:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:12.501 17:55:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:12.501 17:55:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:12.501 [2024-10-28 17:55:28.845655] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
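The scripts/common.sh trace above is the lcov version gate: lt 1.15 2 splits both version strings on dots, dashes, and colons, walks the fields left to right, and succeeds only if the first string compares strictly lower. A condensed reconstruction of that walk, assuming purely numeric fields (the real helper also pushes each field through decimal for validation):

    # Condensed reconstruction of the cmp_versions walk traced above.
    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $2 == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $2 == '<' ]]; return; }
        done
        [[ $2 == *'='* ]]   # all fields equal: only >=, <=, == succeed
    }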
00:05:12.501 [2024-10-28 17:55:28.845877] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58204 ] 00:05:12.758 [2024-10-28 17:55:29.031635] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.758 [2024-10-28 17:55:29.164119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 58204 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 58204 ']' 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 58204 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58204 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:18.018 killing process with pid 58204 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58204' 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 58204 00:05:18.018 17:55:33 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 58204 00:05:19.915 00:05:19.915 real 0m7.195s 00:05:19.915 user 0m6.760s 00:05:19.915 sys 0m0.323s 00:05:19.915 17:55:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:19.915 17:55:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.915 ************************************ 00:05:19.915 END TEST skip_rpc 00:05:19.915 
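TEST skip_rpc, which just wrapped up, is a negative assertion: boot spdk_tgt with --no-rpc-server, wait out the five-second settle window, and require that an RPC call fails. A minimal sketch of that shape, standing in for the harness's NOT/rpc_cmd wrappers:

    # Minimal sketch of the skip_rpc negative test concluded above.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    "$spdk_tgt" --no-rpc-server -m 0x1 &
    spdk_pid=$!
    sleep 5                                   # same settle window as the test

    if "$rpc" spdk_get_version 2>/dev/null; then
        echo 'RPC answered despite --no-rpc-server' >&2
        kill "$spdk_pid"; exit 1
    fi

    kill "$spdk_pid" && wait "$spdk_pid" || true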
************************************ 00:05:19.915 17:55:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:19.915 17:55:35 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:19.915 17:55:35 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:19.915 17:55:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.915 ************************************ 00:05:19.915 START TEST skip_rpc_with_json 00:05:19.915 ************************************ 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=58308 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 58308 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 58308 ']' 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:19.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:19.915 17:55:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:19.915 [2024-10-28 17:55:36.097908] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
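Unlike the previous test, skip_rpc_with_json starts its target with the RPC server enabled and blocks in waitforlisten until the Unix socket answers (defaults from the trace: /var/tmp/spdk.sock, 100 retries). A rough sketch of that poll; the real helper additionally probes the socket over RPC rather than just testing that it exists:

    # Rough sketch of the waitforlisten poll traced above.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for (( i = 0; i < max_retries; i++ )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # socket is up
            sleep 0.1
        done
        return 1
    }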
00:05:19.915 [2024-10-28 17:55:36.098081] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58308 ] 00:05:19.915 [2024-10-28 17:55:36.283994] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.173 [2024-10-28 17:55:36.408935] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.742 [2024-10-28 17:55:37.176132] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:20.742 request: 00:05:20.742 { 00:05:20.742 "trtype": "tcp", 00:05:20.742 "method": "nvmf_get_transports", 00:05:20.742 "req_id": 1 00:05:20.742 } 00:05:20.742 Got JSON-RPC error response 00:05:20.742 response: 00:05:20.742 { 00:05:20.742 "code": -19, 00:05:20.742 "message": "No such device" 00:05:20.742 } 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:20.742 [2024-10-28 17:55:37.188252] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:20.742 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:21.011 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:21.011 17:55:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:21.011 { 00:05:21.011 "subsystems": [ 00:05:21.011 { 00:05:21.011 "subsystem": "fsdev", 00:05:21.011 "config": [ 00:05:21.011 { 00:05:21.011 "method": "fsdev_set_opts", 00:05:21.011 "params": { 00:05:21.011 "fsdev_io_pool_size": 65535, 00:05:21.011 "fsdev_io_cache_size": 256 00:05:21.011 } 00:05:21.011 } 00:05:21.011 ] 00:05:21.011 }, 00:05:21.011 { 00:05:21.011 "subsystem": "keyring", 00:05:21.011 "config": [] 00:05:21.011 }, 00:05:21.011 { 00:05:21.011 "subsystem": "iobuf", 00:05:21.011 "config": [ 00:05:21.011 { 00:05:21.011 "method": "iobuf_set_options", 00:05:21.011 "params": { 00:05:21.011 "small_pool_count": 8192, 00:05:21.011 "large_pool_count": 1024, 00:05:21.011 "small_bufsize": 8192, 00:05:21.011 "large_bufsize": 135168, 00:05:21.011 "enable_numa": false 00:05:21.011 } 00:05:21.011 } 00:05:21.011 ] 00:05:21.011 }, 00:05:21.011 { 00:05:21.011 "subsystem": "sock", 00:05:21.011 "config": [ 00:05:21.011 { 
00:05:21.011 "method": "sock_set_default_impl", 00:05:21.011 "params": { 00:05:21.011 "impl_name": "posix" 00:05:21.011 } 00:05:21.011 }, 00:05:21.011 { 00:05:21.011 "method": "sock_impl_set_options", 00:05:21.011 "params": { 00:05:21.011 "impl_name": "ssl", 00:05:21.011 "recv_buf_size": 4096, 00:05:21.011 "send_buf_size": 4096, 00:05:21.011 "enable_recv_pipe": true, 00:05:21.011 "enable_quickack": false, 00:05:21.011 "enable_placement_id": 0, 00:05:21.011 "enable_zerocopy_send_server": true, 00:05:21.011 "enable_zerocopy_send_client": false, 00:05:21.011 "zerocopy_threshold": 0, 00:05:21.011 "tls_version": 0, 00:05:21.012 "enable_ktls": false 00:05:21.012 } 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "method": "sock_impl_set_options", 00:05:21.012 "params": { 00:05:21.012 "impl_name": "posix", 00:05:21.012 "recv_buf_size": 2097152, 00:05:21.012 "send_buf_size": 2097152, 00:05:21.012 "enable_recv_pipe": true, 00:05:21.012 "enable_quickack": false, 00:05:21.012 "enable_placement_id": 0, 00:05:21.012 "enable_zerocopy_send_server": true, 00:05:21.012 "enable_zerocopy_send_client": false, 00:05:21.012 "zerocopy_threshold": 0, 00:05:21.012 "tls_version": 0, 00:05:21.012 "enable_ktls": false 00:05:21.012 } 00:05:21.012 } 00:05:21.012 ] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "vmd", 00:05:21.012 "config": [] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "accel", 00:05:21.012 "config": [ 00:05:21.012 { 00:05:21.012 "method": "accel_set_options", 00:05:21.012 "params": { 00:05:21.012 "small_cache_size": 128, 00:05:21.012 "large_cache_size": 16, 00:05:21.012 "task_count": 2048, 00:05:21.012 "sequence_count": 2048, 00:05:21.012 "buf_count": 2048 00:05:21.012 } 00:05:21.012 } 00:05:21.012 ] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "bdev", 00:05:21.012 "config": [ 00:05:21.012 { 00:05:21.012 "method": "bdev_set_options", 00:05:21.012 "params": { 00:05:21.012 "bdev_io_pool_size": 65535, 00:05:21.012 "bdev_io_cache_size": 256, 00:05:21.012 "bdev_auto_examine": true, 00:05:21.012 "iobuf_small_cache_size": 128, 00:05:21.012 "iobuf_large_cache_size": 16 00:05:21.012 } 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "method": "bdev_raid_set_options", 00:05:21.012 "params": { 00:05:21.012 "process_window_size_kb": 1024, 00:05:21.012 "process_max_bandwidth_mb_sec": 0 00:05:21.012 } 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "method": "bdev_iscsi_set_options", 00:05:21.012 "params": { 00:05:21.012 "timeout_sec": 30 00:05:21.012 } 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "method": "bdev_nvme_set_options", 00:05:21.012 "params": { 00:05:21.012 "action_on_timeout": "none", 00:05:21.012 "timeout_us": 0, 00:05:21.012 "timeout_admin_us": 0, 00:05:21.012 "keep_alive_timeout_ms": 10000, 00:05:21.012 "arbitration_burst": 0, 00:05:21.012 "low_priority_weight": 0, 00:05:21.012 "medium_priority_weight": 0, 00:05:21.012 "high_priority_weight": 0, 00:05:21.012 "nvme_adminq_poll_period_us": 10000, 00:05:21.012 "nvme_ioq_poll_period_us": 0, 00:05:21.012 "io_queue_requests": 0, 00:05:21.012 "delay_cmd_submit": true, 00:05:21.012 "transport_retry_count": 4, 00:05:21.012 "bdev_retry_count": 3, 00:05:21.012 "transport_ack_timeout": 0, 00:05:21.012 "ctrlr_loss_timeout_sec": 0, 00:05:21.012 "reconnect_delay_sec": 0, 00:05:21.012 "fast_io_fail_timeout_sec": 0, 00:05:21.012 "disable_auto_failback": false, 00:05:21.012 "generate_uuids": false, 00:05:21.012 "transport_tos": 0, 00:05:21.012 "nvme_error_stat": false, 00:05:21.012 "rdma_srq_size": 0, 00:05:21.012 "io_path_stat": false, 
00:05:21.012 "allow_accel_sequence": false, 00:05:21.012 "rdma_max_cq_size": 0, 00:05:21.012 "rdma_cm_event_timeout_ms": 0, 00:05:21.012 "dhchap_digests": [ 00:05:21.012 "sha256", 00:05:21.012 "sha384", 00:05:21.012 "sha512" 00:05:21.012 ], 00:05:21.012 "dhchap_dhgroups": [ 00:05:21.012 "null", 00:05:21.012 "ffdhe2048", 00:05:21.012 "ffdhe3072", 00:05:21.012 "ffdhe4096", 00:05:21.012 "ffdhe6144", 00:05:21.012 "ffdhe8192" 00:05:21.012 ] 00:05:21.012 } 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "method": "bdev_nvme_set_hotplug", 00:05:21.012 "params": { 00:05:21.012 "period_us": 100000, 00:05:21.012 "enable": false 00:05:21.012 } 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "method": "bdev_wait_for_examine" 00:05:21.012 } 00:05:21.012 ] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "scsi", 00:05:21.012 "config": null 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "scheduler", 00:05:21.012 "config": [ 00:05:21.012 { 00:05:21.012 "method": "framework_set_scheduler", 00:05:21.012 "params": { 00:05:21.012 "name": "static" 00:05:21.012 } 00:05:21.012 } 00:05:21.012 ] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "vhost_scsi", 00:05:21.012 "config": [] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "vhost_blk", 00:05:21.012 "config": [] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "ublk", 00:05:21.012 "config": [] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "nbd", 00:05:21.012 "config": [] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "nvmf", 00:05:21.012 "config": [ 00:05:21.012 { 00:05:21.012 "method": "nvmf_set_config", 00:05:21.012 "params": { 00:05:21.012 "discovery_filter": "match_any", 00:05:21.012 "admin_cmd_passthru": { 00:05:21.012 "identify_ctrlr": false 00:05:21.012 }, 00:05:21.012 "dhchap_digests": [ 00:05:21.012 "sha256", 00:05:21.012 "sha384", 00:05:21.012 "sha512" 00:05:21.012 ], 00:05:21.012 "dhchap_dhgroups": [ 00:05:21.012 "null", 00:05:21.012 "ffdhe2048", 00:05:21.012 "ffdhe3072", 00:05:21.012 "ffdhe4096", 00:05:21.012 "ffdhe6144", 00:05:21.012 "ffdhe8192" 00:05:21.012 ] 00:05:21.012 } 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "method": "nvmf_set_max_subsystems", 00:05:21.012 "params": { 00:05:21.012 "max_subsystems": 1024 00:05:21.012 } 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "method": "nvmf_set_crdt", 00:05:21.012 "params": { 00:05:21.012 "crdt1": 0, 00:05:21.012 "crdt2": 0, 00:05:21.012 "crdt3": 0 00:05:21.012 } 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "method": "nvmf_create_transport", 00:05:21.012 "params": { 00:05:21.012 "trtype": "TCP", 00:05:21.012 "max_queue_depth": 128, 00:05:21.012 "max_io_qpairs_per_ctrlr": 127, 00:05:21.012 "in_capsule_data_size": 4096, 00:05:21.012 "max_io_size": 131072, 00:05:21.012 "io_unit_size": 131072, 00:05:21.012 "max_aq_depth": 128, 00:05:21.012 "num_shared_buffers": 511, 00:05:21.012 "buf_cache_size": 4294967295, 00:05:21.012 "dif_insert_or_strip": false, 00:05:21.012 "zcopy": false, 00:05:21.012 "c2h_success": true, 00:05:21.012 "sock_priority": 0, 00:05:21.012 "abort_timeout_sec": 1, 00:05:21.012 "ack_timeout": 0, 00:05:21.012 "data_wr_pool_size": 0 00:05:21.012 } 00:05:21.012 } 00:05:21.012 ] 00:05:21.012 }, 00:05:21.012 { 00:05:21.012 "subsystem": "iscsi", 00:05:21.012 "config": [ 00:05:21.012 { 00:05:21.012 "method": "iscsi_set_options", 00:05:21.012 "params": { 00:05:21.012 "node_base": "iqn.2016-06.io.spdk", 00:05:21.012 "max_sessions": 128, 00:05:21.012 "max_connections_per_session": 2, 00:05:21.012 "max_queue_depth": 64, 00:05:21.012 
"default_time2wait": 2, 00:05:21.012 "default_time2retain": 20, 00:05:21.012 "first_burst_length": 8192, 00:05:21.012 "immediate_data": true, 00:05:21.012 "allow_duplicated_isid": false, 00:05:21.012 "error_recovery_level": 0, 00:05:21.012 "nop_timeout": 60, 00:05:21.012 "nop_in_interval": 30, 00:05:21.012 "disable_chap": false, 00:05:21.012 "require_chap": false, 00:05:21.012 "mutual_chap": false, 00:05:21.012 "chap_group": 0, 00:05:21.012 "max_large_datain_per_connection": 64, 00:05:21.012 "max_r2t_per_connection": 4, 00:05:21.012 "pdu_pool_size": 36864, 00:05:21.012 "immediate_data_pool_size": 16384, 00:05:21.012 "data_out_pool_size": 2048 00:05:21.012 } 00:05:21.012 } 00:05:21.012 ] 00:05:21.012 } 00:05:21.012 ] 00:05:21.012 } 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 58308 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58308 ']' 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58308 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58308 00:05:21.012 killing process with pid 58308 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58308' 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 58308 00:05:21.012 17:55:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58308 00:05:23.541 17:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=58353 00:05:23.541 17:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:23.541 17:55:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 58353 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 58353 ']' 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 58353 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58353 00:05:28.807 killing process with pid 58353 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58353' 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- 
# kill 58353 00:05:28.807 17:55:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 58353 00:05:30.182 17:55:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.182 17:55:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:30.182 ************************************ 00:05:30.182 END TEST skip_rpc_with_json 00:05:30.182 ************************************ 00:05:30.182 00:05:30.182 real 0m10.595s 00:05:30.182 user 0m10.319s 00:05:30.182 sys 0m0.729s 00:05:30.182 17:55:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.182 17:55:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:30.182 17:55:46 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:30.182 17:55:46 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:30.182 17:55:46 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.182 17:55:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.182 ************************************ 00:05:30.182 START TEST skip_rpc_with_delay 00:05:30.182 ************************************ 00:05:30.182 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:05:30.182 17:55:46 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.182 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:30.182 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.182 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.183 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.183 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.183 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.183 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.183 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:30.183 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:30.183 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:30.183 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:30.441 [2024-10-28 17:55:46.746150] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
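Before the skip_rpc_with_delay error above, skip_rpc_with_json finished its round trip: the first target's live state was serialized with save_config, a second target was booted from that file with --json, and the captured log was grepped for the TCP transport banner to prove the configuration replayed. A compressed sketch of the round trip, reusing the paths from the trace and assuming the second target's output is captured to the LOG_PATH defined earlier:

    # Compressed sketch of the skip_rpc_with_json round trip above.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    cfg=/home/vagrant/spdk_repo/spdk/test/rpc/config.json
    log=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt

    "$rpc" nvmf_create_transport -t tcp      # give save_config real state
    "$rpc" save_config > "$cfg"              # serialize the live target
    # ...stop the first target, then replay its state from JSON:
    "$spdk_tgt" --no-rpc-server -m 0x1 --json "$cfg" > "$log" 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' "$log"      # transport came back from JSON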
00:05:30.441 ************************************ 00:05:30.441 END TEST skip_rpc_with_delay 00:05:30.441 ************************************ 00:05:30.441 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:30.441 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:30.441 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:30.441 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:30.441 00:05:30.441 real 0m0.194s 00:05:30.441 user 0m0.111s 00:05:30.441 sys 0m0.081s 00:05:30.441 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:30.441 17:55:46 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 17:55:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:30.441 17:55:46 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:30.441 17:55:46 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:30.441 17:55:46 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:30.441 17:55:46 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:30.441 17:55:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.441 ************************************ 00:05:30.441 START TEST exit_on_failed_rpc_init 00:05:30.441 ************************************ 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=58485 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 58485 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 58485 ']' 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:30.441 17:55:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:30.699 [2024-10-28 17:55:46.987071] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:05:30.699 [2024-10-28 17:55:46.987249] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58485 ] 00:05:30.699 [2024-10-28 17:55:47.174729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.958 [2024-10-28 17:55:47.328368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:31.900 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:31.900 [2024-10-28 17:55:48.298194] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:05:31.900 [2024-10-28 17:55:48.298862] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58510 ] 00:05:32.158 [2024-10-28 17:55:48.484754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.158 [2024-10-28 17:55:48.591933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:32.158 [2024-10-28 17:55:48.592052] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
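exit_on_failed_rpc_init forces the failure path on purpose: with the first target (pid 58485) holding /var/tmp/spdk.sock, the second spdk_tgt on core mask 0x2 must fail to bind the same RPC socket, and the error above is the expected outcome. A bare-bones sketch of the collision, assuming both targets keep the default socket path:

    # Bare-bones sketch of the RPC socket collision traced above.
    spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

    "$spdk_tgt" -m 0x1 &                      # first target owns the socket
    first=$!
    sleep 5                                   # crude stand-in for waitforlisten

    if "$spdk_tgt" -m 0x2; then               # must fail: socket in use
        echo 'second target started despite socket collision' >&2
        kill "$first"; exit 1
    fi

    kill "$first" && wait "$first" || true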
00:05:32.158 [2024-10-28 17:55:48.592075] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:32.158 [2024-10-28 17:55:48.592097] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 58485 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 58485 ']' 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 58485 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58485 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:32.417 killing process with pid 58485 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58485' 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 58485 00:05:32.417 17:55:48 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 58485 00:05:34.946 00:05:34.946 real 0m4.075s 00:05:34.946 user 0m4.628s 00:05:34.946 sys 0m0.535s 00:05:34.946 17:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.946 17:55:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.946 ************************************ 00:05:34.946 END TEST exit_on_failed_rpc_init 00:05:34.946 ************************************ 00:05:34.946 17:55:50 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:34.946 00:05:34.946 real 0m22.452s 00:05:34.946 user 0m22.001s 00:05:34.946 sys 0m1.869s 00:05:34.946 17:55:50 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.946 ************************************ 00:05:34.946 END TEST skip_rpc 00:05:34.946 17:55:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.946 ************************************ 00:05:34.946 17:55:51 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.946 17:55:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.946 17:55:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.946 17:55:51 -- common/autotest_common.sh@10 -- # set +x 00:05:34.946 
************************************ 00:05:34.946 START TEST rpc_client 00:05:34.946 ************************************ 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:34.946 * Looking for test storage... 00:05:34.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:34.946 17:55:51 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.946 --rc genhtml_branch_coverage=1 00:05:34.946 --rc genhtml_function_coverage=1 00:05:34.946 --rc genhtml_legend=1 00:05:34.946 --rc geninfo_all_blocks=1 00:05:34.946 --rc geninfo_unexecuted_blocks=1 00:05:34.946 00:05:34.946 ' 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.946 --rc genhtml_branch_coverage=1 00:05:34.946 --rc genhtml_function_coverage=1 00:05:34.946 --rc genhtml_legend=1 00:05:34.946 --rc geninfo_all_blocks=1 00:05:34.946 --rc geninfo_unexecuted_blocks=1 00:05:34.946 00:05:34.946 ' 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.946 --rc genhtml_branch_coverage=1 00:05:34.946 --rc genhtml_function_coverage=1 00:05:34.946 --rc genhtml_legend=1 00:05:34.946 --rc geninfo_all_blocks=1 00:05:34.946 --rc geninfo_unexecuted_blocks=1 00:05:34.946 00:05:34.946 ' 00:05:34.946 17:55:51 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:34.946 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:34.946 --rc genhtml_branch_coverage=1 00:05:34.946 --rc genhtml_function_coverage=1 00:05:34.947 --rc genhtml_legend=1 00:05:34.947 --rc geninfo_all_blocks=1 00:05:34.947 --rc geninfo_unexecuted_blocks=1 00:05:34.947 00:05:34.947 ' 00:05:34.947 17:55:51 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:34.947 OK 00:05:34.947 17:55:51 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:34.947 00:05:34.947 real 0m0.265s 00:05:34.947 user 0m0.159s 00:05:34.947 sys 0m0.114s 00:05:34.947 17:55:51 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:34.947 17:55:51 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 ************************************ 00:05:34.947 END TEST rpc_client 00:05:34.947 ************************************ 00:05:34.947 17:55:51 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.947 17:55:51 -- 
common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:34.947 17:55:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:34.947 17:55:51 -- common/autotest_common.sh@10 -- # set +x 00:05:34.947 ************************************ 00:05:34.947 START TEST json_config 00:05:34.947 ************************************ 00:05:34.947 17:55:51 json_config -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:34.947 17:55:51 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:34.947 17:55:51 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:05:34.947 17:55:51 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.205 17:55:51 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.205 17:55:51 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.205 17:55:51 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.205 17:55:51 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.205 17:55:51 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.205 17:55:51 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.205 17:55:51 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.205 17:55:51 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.205 17:55:51 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.205 17:55:51 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.205 17:55:51 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.205 17:55:51 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.205 17:55:51 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:35.205 17:55:51 json_config -- scripts/common.sh@345 -- # : 1 00:05:35.205 17:55:51 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.205 17:55:51 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:35.205 17:55:51 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:35.205 17:55:51 json_config -- scripts/common.sh@353 -- # local d=1 00:05:35.205 17:55:51 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.205 17:55:51 json_config -- scripts/common.sh@355 -- # echo 1 00:05:35.205 17:55:51 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.205 17:55:51 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:35.205 17:55:51 json_config -- scripts/common.sh@353 -- # local d=2 00:05:35.205 17:55:51 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.205 17:55:51 json_config -- scripts/common.sh@355 -- # echo 2 00:05:35.205 17:55:51 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.205 17:55:51 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.205 17:55:51 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.205 17:55:51 json_config -- scripts/common.sh@368 -- # return 0 00:05:35.205 17:55:51 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.205 17:55:51 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.205 --rc genhtml_branch_coverage=1 00:05:35.205 --rc genhtml_function_coverage=1 00:05:35.205 --rc genhtml_legend=1 00:05:35.205 --rc geninfo_all_blocks=1 00:05:35.205 --rc geninfo_unexecuted_blocks=1 00:05:35.205 00:05:35.205 ' 00:05:35.205 17:55:51 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.205 --rc genhtml_branch_coverage=1 00:05:35.205 --rc genhtml_function_coverage=1 00:05:35.205 --rc genhtml_legend=1 00:05:35.205 --rc geninfo_all_blocks=1 00:05:35.205 --rc geninfo_unexecuted_blocks=1 00:05:35.205 00:05:35.205 ' 00:05:35.205 17:55:51 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.205 --rc genhtml_branch_coverage=1 00:05:35.205 --rc genhtml_function_coverage=1 00:05:35.205 --rc genhtml_legend=1 00:05:35.205 --rc geninfo_all_blocks=1 00:05:35.205 --rc geninfo_unexecuted_blocks=1 00:05:35.205 00:05:35.205 ' 00:05:35.205 17:55:51 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.205 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.205 --rc genhtml_branch_coverage=1 00:05:35.205 --rc genhtml_function_coverage=1 00:05:35.205 --rc genhtml_legend=1 00:05:35.205 --rc geninfo_all_blocks=1 00:05:35.205 --rc geninfo_unexecuted_blocks=1 00:05:35.205 00:05:35.205 ' 00:05:35.205 17:55:51 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.205 17:55:51 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae374150-be72-4028-b88b-bc3663361fee 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=ae374150-be72-4028-b88b-bc3663361fee 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.205 17:55:51 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.205 17:55:51 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.205 17:55:51 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.205 17:55:51 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.205 17:55:51 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.205 17:55:51 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.205 17:55:51 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.205 17:55:51 json_config -- paths/export.sh@5 -- # export PATH 00:05:35.205 17:55:51 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@51 -- # : 0 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.205 17:55:51 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.205 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.205 17:55:51 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.205 17:55:51 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.205 17:55:51 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:35.205 17:55:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:35.205 17:55:51 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:35.205 17:55:51 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:35.205 WARNING: No tests are enabled so not running JSON configuration tests 00:05:35.205 17:55:51 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:35.205 17:55:51 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:35.205 00:05:35.205 real 0m0.176s 00:05:35.205 user 0m0.126s 00:05:35.205 sys 0m0.054s 00:05:35.205 17:55:51 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:35.205 17:55:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:35.205 ************************************ 00:05:35.205 END TEST json_config 00:05:35.205 ************************************ 00:05:35.205 17:55:51 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.205 17:55:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:35.205 17:55:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:35.205 17:55:51 -- common/autotest_common.sh@10 -- # set +x 00:05:35.205 ************************************ 00:05:35.205 START TEST json_config_extra_key 00:05:35.205 ************************************ 00:05:35.205 17:55:51 json_config_extra_key -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:35.205 17:55:51 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:35.205 17:55:51 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:05:35.205 17:55:51 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:35.464 17:55:51 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.464 17:55:51 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.464 17:55:51 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:35.464 17:55:51 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.464 17:55:51 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:35.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.464 --rc genhtml_branch_coverage=1 00:05:35.464 --rc genhtml_function_coverage=1 00:05:35.464 --rc genhtml_legend=1 00:05:35.464 --rc geninfo_all_blocks=1 00:05:35.464 --rc geninfo_unexecuted_blocks=1 00:05:35.464 00:05:35.464 ' 00:05:35.464 17:55:51 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:35.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.464 --rc genhtml_branch_coverage=1 00:05:35.464 --rc genhtml_function_coverage=1 00:05:35.464 --rc genhtml_legend=1 00:05:35.464 --rc geninfo_all_blocks=1 00:05:35.464 --rc geninfo_unexecuted_blocks=1 00:05:35.464 00:05:35.465 ' 00:05:35.465 17:55:51 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:35.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.465 --rc genhtml_branch_coverage=1 00:05:35.465 --rc genhtml_function_coverage=1 00:05:35.465 --rc genhtml_legend=1 00:05:35.465 --rc geninfo_all_blocks=1 00:05:35.465 --rc geninfo_unexecuted_blocks=1 00:05:35.465 00:05:35.465 ' 00:05:35.465 17:55:51 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:35.465 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.465 --rc genhtml_branch_coverage=1 00:05:35.465 --rc 
genhtml_function_coverage=1 00:05:35.465 --rc genhtml_legend=1 00:05:35.465 --rc geninfo_all_blocks=1 00:05:35.465 --rc geninfo_unexecuted_blocks=1 00:05:35.465 00:05:35.465 ' 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:ae374150-be72-4028-b88b-bc3663361fee 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=ae374150-be72-4028-b88b-bc3663361fee 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:35.465 17:55:51 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:35.465 17:55:51 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:35.465 17:55:51 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:35.465 17:55:51 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:35.465 17:55:51 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.465 17:55:51 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.465 17:55:51 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.465 17:55:51 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:35.465 17:55:51 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:35.465 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:35.465 17:55:51 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:35.465 INFO: launching applications... 00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
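Note: the launch traced below follows a start-then-poll pattern. json_config_test_start_app records the app's pid and socket, launches spdk_tgt with the extra_key.json config, and waitforlisten polls the UNIX-domain RPC socket (the harness caps this at max_retries=100) until the target answers. A minimal sketch of that pattern, using SPDK's stock rpc.py and illustrative variable names rather than the harness's exact helpers:

    app_socket=/var/tmp/spdk_tgt.sock
    build/bin/spdk_tgt -m 0x1 -s 1024 -r "$app_socket" \
        --json test/json_config/extra_key.json &
    app_pid=$!
    # Poll until the target services RPC on its socket; bail out if it died.
    until scripts/rpc.py -s "$app_socket" rpc_get_methods &>/dev/null; do
        kill -0 "$app_pid" 2>/dev/null || exit 1
        sleep 0.1   # illustrative interval, not the harness's exact value
    done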
00:05:35.465 17:55:51 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58709 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:35.465 Waiting for target to run... 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:35.465 17:55:51 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58709 /var/tmp/spdk_tgt.sock 00:05:35.465 17:55:51 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 58709 ']' 00:05:35.465 17:55:51 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:35.465 17:55:51 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:35.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:35.465 17:55:51 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:35.465 17:55:51 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:35.465 17:55:51 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:35.465 [2024-10-28 17:55:51.871118] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:05:35.465 [2024-10-28 17:55:51.871314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58709 ] 00:05:36.032 [2024-10-28 17:55:52.229239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.032 [2024-10-28 17:55:52.324054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.597 17:55:52 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:36.597 17:55:52 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:05:36.597 00:05:36.597 17:55:52 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:36.597 INFO: shutting down applications... 00:05:36.597 17:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
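Note: the teardown traced below sends SIGINT to the target and then uses kill -0 as a liveness probe, sleeping 0.5 s between checks for at most 30 iterations before it would give up. Stripped of the harness bookkeeping, the loop is roughly:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        # Signal 0 delivers nothing; it only tests whether the pid still exists.
        kill -0 "$app_pid" 2>/dev/null || break
        sleep 0.5
    done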
00:05:36.597 17:55:52 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:36.597 17:55:52 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:36.597 17:55:52 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:36.597 17:55:52 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58709 ]] 00:05:36.597 17:55:52 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58709 00:05:36.597 17:55:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:36.597 17:55:52 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:36.597 17:55:52 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58709 00:05:36.597 17:55:52 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.163 17:55:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.163 17:55:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.163 17:55:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58709 00:05:37.163 17:55:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:37.729 17:55:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:37.729 17:55:53 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:37.729 17:55:53 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58709 00:05:37.729 17:55:53 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.295 17:55:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.295 17:55:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.295 17:55:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58709 00:05:38.295 17:55:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:38.553 17:55:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:38.553 17:55:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:38.553 17:55:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58709 00:05:38.553 17:55:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:39.121 17:55:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:39.121 17:55:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:39.121 17:55:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58709 00:05:39.121 17:55:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:39.121 17:55:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:39.121 17:55:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:39.121 SPDK target shutdown done 00:05:39.121 17:55:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:39.121 Success 00:05:39.121 17:55:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:39.121 00:05:39.121 real 0m3.943s 00:05:39.121 user 0m3.742s 00:05:39.121 sys 0m0.500s 00:05:39.121 17:55:55 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:39.121 17:55:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:39.121 ************************************ 00:05:39.121 END TEST json_config_extra_key 00:05:39.121 ************************************ 00:05:39.121 17:55:55 -- spdk/autotest.sh@161 
-- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.121 17:55:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:39.121 17:55:55 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:39.121 17:55:55 -- common/autotest_common.sh@10 -- # set +x 00:05:39.121 ************************************ 00:05:39.121 START TEST alias_rpc 00:05:39.121 ************************************ 00:05:39.121 17:55:55 alias_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:39.380 * Looking for test storage... 00:05:39.380 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:39.380 17:55:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:39.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.380 --rc genhtml_branch_coverage=1 00:05:39.380 --rc genhtml_function_coverage=1 00:05:39.380 --rc genhtml_legend=1 00:05:39.380 --rc geninfo_all_blocks=1 00:05:39.380 --rc geninfo_unexecuted_blocks=1 00:05:39.380 00:05:39.380 ' 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:39.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.380 --rc genhtml_branch_coverage=1 00:05:39.380 --rc genhtml_function_coverage=1 00:05:39.380 --rc genhtml_legend=1 00:05:39.380 --rc geninfo_all_blocks=1 00:05:39.380 --rc geninfo_unexecuted_blocks=1 00:05:39.380 00:05:39.380 ' 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:39.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.380 --rc genhtml_branch_coverage=1 00:05:39.380 --rc genhtml_function_coverage=1 00:05:39.380 --rc genhtml_legend=1 00:05:39.380 --rc geninfo_all_blocks=1 00:05:39.380 --rc geninfo_unexecuted_blocks=1 00:05:39.380 00:05:39.380 ' 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:39.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:39.380 --rc genhtml_branch_coverage=1 00:05:39.380 --rc genhtml_function_coverage=1 00:05:39.380 --rc genhtml_legend=1 00:05:39.380 --rc geninfo_all_blocks=1 00:05:39.380 --rc geninfo_unexecuted_blocks=1 00:05:39.380 00:05:39.380 ' 00:05:39.380 17:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:39.380 17:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58814 00:05:39.380 17:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58814 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 58814 ']' 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.380 17:55:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:39.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
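Note: each test sources common/autotest_common.sh, which probes the installed lcov and, via the cmp_versions trace above, decides whether it predates 2.x before exporting the LCOV_OPTS/LCOV coverage flags. The comparison splits both versions on dots and dashes and walks the fields numerically; a simplified re-implementation of the lt helper seen in the trace (assuming plain dotted versions, no suffix handling):

    lt() {   # lt 1.15 2  -> exit 0 (true) iff $1 sorts before $2
        local -a ver1 ver2
        IFS=.- read -ra ver1 <<< "$1"
        IFS=.- read -ra ver2 <<< "$2"
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1   # equal versions are not "less than"
    }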
00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:39.380 17:55:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.640 [2024-10-28 17:55:55.859831] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:05:39.640 [2024-10-28 17:55:55.860002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58814 ] 00:05:39.640 [2024-10-28 17:55:56.037189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.899 [2024-10-28 17:55:56.165909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.836 17:55:56 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:40.836 17:55:56 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:05:40.836 17:55:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:40.836 17:55:57 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58814 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 58814 ']' 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 58814 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58814 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:40.836 killing process with pid 58814 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58814' 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@971 -- # kill 58814 00:05:40.836 17:55:57 alias_rpc -- common/autotest_common.sh@976 -- # wait 58814 00:05:43.364 00:05:43.364 real 0m3.799s 00:05:43.364 user 0m4.029s 00:05:43.364 sys 0m0.479s 00:05:43.364 17:55:59 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:43.364 17:55:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:43.364 ************************************ 00:05:43.364 END TEST alias_rpc 00:05:43.364 ************************************ 00:05:43.364 17:55:59 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:43.364 17:55:59 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.364 17:55:59 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:43.364 17:55:59 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:43.364 17:55:59 -- common/autotest_common.sh@10 -- # set +x 00:05:43.364 ************************************ 00:05:43.364 START TEST spdkcli_tcp 00:05:43.364 ************************************ 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:43.364 * Looking for test storage... 
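Note: the alias_rpc teardown traced above shows killprocess's safety checks before it signals the target: it confirms the platform, resolves the pid's command name with ps, refuses to kill anything running as sudo, and only then sends the kill and waits. Condensed, with the trace's checks kept and the real helper's extra branches omitted:

    killprocess() {   # simplified sketch of the harness helper
        local pid=$1 process_name
        kill -0 "$pid" || return 1                       # is it alive at all?
        process_name=$(ps --no-headers -o comm= "$pid")  # e.g. reactor_0
        [[ $process_name == sudo ]] && return 1          # never kill the wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"                       # pid is a shell child here
    }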
00:05:43.364 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.364 17:55:59 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:43.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.364 --rc genhtml_branch_coverage=1 00:05:43.364 --rc genhtml_function_coverage=1 00:05:43.364 --rc genhtml_legend=1 00:05:43.364 --rc geninfo_all_blocks=1 00:05:43.364 --rc geninfo_unexecuted_blocks=1 00:05:43.364 00:05:43.364 ' 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:43.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.364 --rc genhtml_branch_coverage=1 00:05:43.364 --rc genhtml_function_coverage=1 00:05:43.364 --rc genhtml_legend=1 00:05:43.364 --rc geninfo_all_blocks=1 00:05:43.364 --rc geninfo_unexecuted_blocks=1 00:05:43.364 
00:05:43.364 ' 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:43.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.364 --rc genhtml_branch_coverage=1 00:05:43.364 --rc genhtml_function_coverage=1 00:05:43.364 --rc genhtml_legend=1 00:05:43.364 --rc geninfo_all_blocks=1 00:05:43.364 --rc geninfo_unexecuted_blocks=1 00:05:43.364 00:05:43.364 ' 00:05:43.364 17:55:59 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:43.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.364 --rc genhtml_branch_coverage=1 00:05:43.364 --rc genhtml_function_coverage=1 00:05:43.364 --rc genhtml_legend=1 00:05:43.364 --rc geninfo_all_blocks=1 00:05:43.364 --rc geninfo_unexecuted_blocks=1 00:05:43.364 00:05:43.364 ' 00:05:43.364 17:55:59 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:43.364 17:55:59 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:43.364 17:55:59 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:43.364 17:55:59 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:43.364 17:55:59 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:43.364 17:55:59 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:43.364 17:55:59 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:43.365 17:55:59 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:43.365 17:55:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.365 17:55:59 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58921 00:05:43.365 17:55:59 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58921 00:05:43.365 17:55:59 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:43.365 17:55:59 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 58921 ']' 00:05:43.365 17:55:59 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.365 17:55:59 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:43.365 17:55:59 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.365 17:55:59 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:43.365 17:55:59 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.365 [2024-10-28 17:55:59.697019] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
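Note: unlike the earlier single-core targets, spdkcli_tcp starts spdk_tgt with -m 0x3 -p 0. The -m argument is a hex bitmap of CPU cores and -p selects the main core, so the EAL banner below reports two cores and a reactor thread comes up on each of core 0 and core 1. A quick, illustrative way to decode such a mask:

    mask=0x3
    for (( core = 0; core < 64; core++ )); do
        (( (mask >> core) & 1 )) && echo "reactor core: $core"
    done
    # prints: reactor core: 0
    #         reactor core: 1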
00:05:43.365 [2024-10-28 17:55:59.697209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58921 ] 00:05:43.623 [2024-10-28 17:55:59.883758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.623 [2024-10-28 17:56:00.010796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.623 [2024-10-28 17:56:00.010796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.557 17:56:00 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:44.557 17:56:00 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:05:44.557 17:56:00 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58938 00:05:44.557 17:56:00 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:44.557 17:56:00 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:44.816 [ 00:05:44.816 "bdev_malloc_delete", 00:05:44.816 "bdev_malloc_create", 00:05:44.816 "bdev_null_resize", 00:05:44.816 "bdev_null_delete", 00:05:44.816 "bdev_null_create", 00:05:44.816 "bdev_nvme_cuse_unregister", 00:05:44.816 "bdev_nvme_cuse_register", 00:05:44.816 "bdev_opal_new_user", 00:05:44.816 "bdev_opal_set_lock_state", 00:05:44.816 "bdev_opal_delete", 00:05:44.816 "bdev_opal_get_info", 00:05:44.816 "bdev_opal_create", 00:05:44.816 "bdev_nvme_opal_revert", 00:05:44.816 "bdev_nvme_opal_init", 00:05:44.816 "bdev_nvme_send_cmd", 00:05:44.816 "bdev_nvme_set_keys", 00:05:44.816 "bdev_nvme_get_path_iostat", 00:05:44.816 "bdev_nvme_get_mdns_discovery_info", 00:05:44.816 "bdev_nvme_stop_mdns_discovery", 00:05:44.816 "bdev_nvme_start_mdns_discovery", 00:05:44.816 "bdev_nvme_set_multipath_policy", 00:05:44.816 "bdev_nvme_set_preferred_path", 00:05:44.816 "bdev_nvme_get_io_paths", 00:05:44.816 "bdev_nvme_remove_error_injection", 00:05:44.816 "bdev_nvme_add_error_injection", 00:05:44.816 "bdev_nvme_get_discovery_info", 00:05:44.816 "bdev_nvme_stop_discovery", 00:05:44.816 "bdev_nvme_start_discovery", 00:05:44.816 "bdev_nvme_get_controller_health_info", 00:05:44.816 "bdev_nvme_disable_controller", 00:05:44.816 "bdev_nvme_enable_controller", 00:05:44.816 "bdev_nvme_reset_controller", 00:05:44.816 "bdev_nvme_get_transport_statistics", 00:05:44.816 "bdev_nvme_apply_firmware", 00:05:44.816 "bdev_nvme_detach_controller", 00:05:44.816 "bdev_nvme_get_controllers", 00:05:44.816 "bdev_nvme_attach_controller", 00:05:44.816 "bdev_nvme_set_hotplug", 00:05:44.816 "bdev_nvme_set_options", 00:05:44.816 "bdev_passthru_delete", 00:05:44.816 "bdev_passthru_create", 00:05:44.816 "bdev_lvol_set_parent_bdev", 00:05:44.816 "bdev_lvol_set_parent", 00:05:44.816 "bdev_lvol_check_shallow_copy", 00:05:44.816 "bdev_lvol_start_shallow_copy", 00:05:44.816 "bdev_lvol_grow_lvstore", 00:05:44.816 "bdev_lvol_get_lvols", 00:05:44.816 "bdev_lvol_get_lvstores", 00:05:44.816 "bdev_lvol_delete", 00:05:44.816 "bdev_lvol_set_read_only", 00:05:44.816 "bdev_lvol_resize", 00:05:44.816 "bdev_lvol_decouple_parent", 00:05:44.816 "bdev_lvol_inflate", 00:05:44.816 "bdev_lvol_rename", 00:05:44.816 "bdev_lvol_clone_bdev", 00:05:44.816 "bdev_lvol_clone", 00:05:44.816 "bdev_lvol_snapshot", 00:05:44.816 "bdev_lvol_create", 00:05:44.816 "bdev_lvol_delete_lvstore", 00:05:44.816 "bdev_lvol_rename_lvstore", 00:05:44.816 
"bdev_lvol_create_lvstore", 00:05:44.816 "bdev_raid_set_options", 00:05:44.816 "bdev_raid_remove_base_bdev", 00:05:44.816 "bdev_raid_add_base_bdev", 00:05:44.816 "bdev_raid_delete", 00:05:44.816 "bdev_raid_create", 00:05:44.816 "bdev_raid_get_bdevs", 00:05:44.816 "bdev_error_inject_error", 00:05:44.816 "bdev_error_delete", 00:05:44.816 "bdev_error_create", 00:05:44.816 "bdev_split_delete", 00:05:44.816 "bdev_split_create", 00:05:44.816 "bdev_delay_delete", 00:05:44.816 "bdev_delay_create", 00:05:44.816 "bdev_delay_update_latency", 00:05:44.816 "bdev_zone_block_delete", 00:05:44.816 "bdev_zone_block_create", 00:05:44.816 "blobfs_create", 00:05:44.816 "blobfs_detect", 00:05:44.816 "blobfs_set_cache_size", 00:05:44.816 "bdev_xnvme_delete", 00:05:44.816 "bdev_xnvme_create", 00:05:44.816 "bdev_aio_delete", 00:05:44.816 "bdev_aio_rescan", 00:05:44.816 "bdev_aio_create", 00:05:44.816 "bdev_ftl_set_property", 00:05:44.816 "bdev_ftl_get_properties", 00:05:44.816 "bdev_ftl_get_stats", 00:05:44.816 "bdev_ftl_unmap", 00:05:44.816 "bdev_ftl_unload", 00:05:44.816 "bdev_ftl_delete", 00:05:44.816 "bdev_ftl_load", 00:05:44.816 "bdev_ftl_create", 00:05:44.816 "bdev_virtio_attach_controller", 00:05:44.816 "bdev_virtio_scsi_get_devices", 00:05:44.816 "bdev_virtio_detach_controller", 00:05:44.816 "bdev_virtio_blk_set_hotplug", 00:05:44.816 "bdev_iscsi_delete", 00:05:44.816 "bdev_iscsi_create", 00:05:44.816 "bdev_iscsi_set_options", 00:05:44.816 "accel_error_inject_error", 00:05:44.816 "ioat_scan_accel_module", 00:05:44.816 "dsa_scan_accel_module", 00:05:44.816 "iaa_scan_accel_module", 00:05:44.816 "keyring_file_remove_key", 00:05:44.816 "keyring_file_add_key", 00:05:44.816 "keyring_linux_set_options", 00:05:44.816 "fsdev_aio_delete", 00:05:44.816 "fsdev_aio_create", 00:05:44.816 "iscsi_get_histogram", 00:05:44.816 "iscsi_enable_histogram", 00:05:44.816 "iscsi_set_options", 00:05:44.816 "iscsi_get_auth_groups", 00:05:44.816 "iscsi_auth_group_remove_secret", 00:05:44.816 "iscsi_auth_group_add_secret", 00:05:44.816 "iscsi_delete_auth_group", 00:05:44.816 "iscsi_create_auth_group", 00:05:44.816 "iscsi_set_discovery_auth", 00:05:44.816 "iscsi_get_options", 00:05:44.816 "iscsi_target_node_request_logout", 00:05:44.817 "iscsi_target_node_set_redirect", 00:05:44.817 "iscsi_target_node_set_auth", 00:05:44.817 "iscsi_target_node_add_lun", 00:05:44.817 "iscsi_get_stats", 00:05:44.817 "iscsi_get_connections", 00:05:44.817 "iscsi_portal_group_set_auth", 00:05:44.817 "iscsi_start_portal_group", 00:05:44.817 "iscsi_delete_portal_group", 00:05:44.817 "iscsi_create_portal_group", 00:05:44.817 "iscsi_get_portal_groups", 00:05:44.817 "iscsi_delete_target_node", 00:05:44.817 "iscsi_target_node_remove_pg_ig_maps", 00:05:44.817 "iscsi_target_node_add_pg_ig_maps", 00:05:44.817 "iscsi_create_target_node", 00:05:44.817 "iscsi_get_target_nodes", 00:05:44.817 "iscsi_delete_initiator_group", 00:05:44.817 "iscsi_initiator_group_remove_initiators", 00:05:44.817 "iscsi_initiator_group_add_initiators", 00:05:44.817 "iscsi_create_initiator_group", 00:05:44.817 "iscsi_get_initiator_groups", 00:05:44.817 "nvmf_set_crdt", 00:05:44.817 "nvmf_set_config", 00:05:44.817 "nvmf_set_max_subsystems", 00:05:44.817 "nvmf_stop_mdns_prr", 00:05:44.817 "nvmf_publish_mdns_prr", 00:05:44.817 "nvmf_subsystem_get_listeners", 00:05:44.817 "nvmf_subsystem_get_qpairs", 00:05:44.817 "nvmf_subsystem_get_controllers", 00:05:44.817 "nvmf_get_stats", 00:05:44.817 "nvmf_get_transports", 00:05:44.817 "nvmf_create_transport", 00:05:44.817 "nvmf_get_targets", 00:05:44.817 
"nvmf_delete_target", 00:05:44.817 "nvmf_create_target", 00:05:44.817 "nvmf_subsystem_allow_any_host", 00:05:44.817 "nvmf_subsystem_set_keys", 00:05:44.817 "nvmf_subsystem_remove_host", 00:05:44.817 "nvmf_subsystem_add_host", 00:05:44.817 "nvmf_ns_remove_host", 00:05:44.817 "nvmf_ns_add_host", 00:05:44.817 "nvmf_subsystem_remove_ns", 00:05:44.817 "nvmf_subsystem_set_ns_ana_group", 00:05:44.817 "nvmf_subsystem_add_ns", 00:05:44.817 "nvmf_subsystem_listener_set_ana_state", 00:05:44.817 "nvmf_discovery_get_referrals", 00:05:44.817 "nvmf_discovery_remove_referral", 00:05:44.817 "nvmf_discovery_add_referral", 00:05:44.817 "nvmf_subsystem_remove_listener", 00:05:44.817 "nvmf_subsystem_add_listener", 00:05:44.817 "nvmf_delete_subsystem", 00:05:44.817 "nvmf_create_subsystem", 00:05:44.817 "nvmf_get_subsystems", 00:05:44.817 "env_dpdk_get_mem_stats", 00:05:44.817 "nbd_get_disks", 00:05:44.817 "nbd_stop_disk", 00:05:44.817 "nbd_start_disk", 00:05:44.817 "ublk_recover_disk", 00:05:44.817 "ublk_get_disks", 00:05:44.817 "ublk_stop_disk", 00:05:44.817 "ublk_start_disk", 00:05:44.817 "ublk_destroy_target", 00:05:44.817 "ublk_create_target", 00:05:44.817 "virtio_blk_create_transport", 00:05:44.817 "virtio_blk_get_transports", 00:05:44.817 "vhost_controller_set_coalescing", 00:05:44.817 "vhost_get_controllers", 00:05:44.817 "vhost_delete_controller", 00:05:44.817 "vhost_create_blk_controller", 00:05:44.817 "vhost_scsi_controller_remove_target", 00:05:44.817 "vhost_scsi_controller_add_target", 00:05:44.817 "vhost_start_scsi_controller", 00:05:44.817 "vhost_create_scsi_controller", 00:05:44.817 "thread_set_cpumask", 00:05:44.817 "scheduler_set_options", 00:05:44.817 "framework_get_governor", 00:05:44.817 "framework_get_scheduler", 00:05:44.817 "framework_set_scheduler", 00:05:44.817 "framework_get_reactors", 00:05:44.817 "thread_get_io_channels", 00:05:44.817 "thread_get_pollers", 00:05:44.817 "thread_get_stats", 00:05:44.817 "framework_monitor_context_switch", 00:05:44.817 "spdk_kill_instance", 00:05:44.817 "log_enable_timestamps", 00:05:44.817 "log_get_flags", 00:05:44.817 "log_clear_flag", 00:05:44.817 "log_set_flag", 00:05:44.817 "log_get_level", 00:05:44.817 "log_set_level", 00:05:44.817 "log_get_print_level", 00:05:44.817 "log_set_print_level", 00:05:44.817 "framework_enable_cpumask_locks", 00:05:44.817 "framework_disable_cpumask_locks", 00:05:44.817 "framework_wait_init", 00:05:44.817 "framework_start_init", 00:05:44.817 "scsi_get_devices", 00:05:44.817 "bdev_get_histogram", 00:05:44.817 "bdev_enable_histogram", 00:05:44.817 "bdev_set_qos_limit", 00:05:44.817 "bdev_set_qd_sampling_period", 00:05:44.817 "bdev_get_bdevs", 00:05:44.817 "bdev_reset_iostat", 00:05:44.817 "bdev_get_iostat", 00:05:44.817 "bdev_examine", 00:05:44.817 "bdev_wait_for_examine", 00:05:44.817 "bdev_set_options", 00:05:44.817 "accel_get_stats", 00:05:44.817 "accel_set_options", 00:05:44.817 "accel_set_driver", 00:05:44.817 "accel_crypto_key_destroy", 00:05:44.817 "accel_crypto_keys_get", 00:05:44.817 "accel_crypto_key_create", 00:05:44.817 "accel_assign_opc", 00:05:44.817 "accel_get_module_info", 00:05:44.817 "accel_get_opc_assignments", 00:05:44.817 "vmd_rescan", 00:05:44.817 "vmd_remove_device", 00:05:44.817 "vmd_enable", 00:05:44.817 "sock_get_default_impl", 00:05:44.817 "sock_set_default_impl", 00:05:44.817 "sock_impl_set_options", 00:05:44.817 "sock_impl_get_options", 00:05:44.817 "iobuf_get_stats", 00:05:44.817 "iobuf_set_options", 00:05:44.817 "keyring_get_keys", 00:05:44.817 "framework_get_pci_devices", 00:05:44.817 
"framework_get_config", 00:05:44.817 "framework_get_subsystems", 00:05:44.817 "fsdev_set_opts", 00:05:44.817 "fsdev_get_opts", 00:05:44.817 "trace_get_info", 00:05:44.817 "trace_get_tpoint_group_mask", 00:05:44.817 "trace_disable_tpoint_group", 00:05:44.817 "trace_enable_tpoint_group", 00:05:44.817 "trace_clear_tpoint_mask", 00:05:44.817 "trace_set_tpoint_mask", 00:05:44.817 "notify_get_notifications", 00:05:44.817 "notify_get_types", 00:05:44.817 "spdk_get_version", 00:05:44.817 "rpc_get_methods" 00:05:44.817 ] 00:05:44.817 17:56:01 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.817 17:56:01 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:44.817 17:56:01 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58921 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 58921 ']' 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 58921 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 58921 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:44.817 killing process with pid 58921 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 58921' 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 58921 00:05:44.817 17:56:01 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 58921 00:05:47.343 00:05:47.343 real 0m3.852s 00:05:47.343 user 0m7.070s 00:05:47.343 sys 0m0.522s 00:05:47.343 17:56:03 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:47.343 17:56:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:47.343 ************************************ 00:05:47.343 END TEST spdkcli_tcp 00:05:47.343 ************************************ 00:05:47.343 17:56:03 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.343 17:56:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:47.343 17:56:03 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:47.343 17:56:03 -- common/autotest_common.sh@10 -- # set +x 00:05:47.343 ************************************ 00:05:47.343 START TEST dpdk_mem_utility 00:05:47.343 ************************************ 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:47.343 * Looking for test storage... 
00:05:47.343 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:47.343 17:56:03 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:47.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.343 --rc genhtml_branch_coverage=1 00:05:47.343 --rc genhtml_function_coverage=1 00:05:47.343 --rc genhtml_legend=1 00:05:47.343 --rc geninfo_all_blocks=1 00:05:47.343 --rc geninfo_unexecuted_blocks=1 00:05:47.343 00:05:47.343 ' 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:47.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.343 --rc 
genhtml_branch_coverage=1 00:05:47.343 --rc genhtml_function_coverage=1 00:05:47.343 --rc genhtml_legend=1 00:05:47.343 --rc geninfo_all_blocks=1 00:05:47.343 --rc geninfo_unexecuted_blocks=1 00:05:47.343 00:05:47.343 ' 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:47.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.343 --rc genhtml_branch_coverage=1 00:05:47.343 --rc genhtml_function_coverage=1 00:05:47.343 --rc genhtml_legend=1 00:05:47.343 --rc geninfo_all_blocks=1 00:05:47.343 --rc geninfo_unexecuted_blocks=1 00:05:47.343 00:05:47.343 ' 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:47.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:47.343 --rc genhtml_branch_coverage=1 00:05:47.343 --rc genhtml_function_coverage=1 00:05:47.343 --rc genhtml_legend=1 00:05:47.343 --rc geninfo_all_blocks=1 00:05:47.343 --rc geninfo_unexecuted_blocks=1 00:05:47.343 00:05:47.343 ' 00:05:47.343 17:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:47.343 17:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59042 00:05:47.343 17:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:47.343 17:56:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59042 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 59042 ']' 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:47.343 17:56:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:47.343 [2024-10-28 17:56:03.637407] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:05:47.343 [2024-10-28 17:56:03.638107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59042 ] 00:05:47.602 [2024-10-28 17:56:03.826090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.602 [2024-10-28 17:56:03.951074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.536 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:48.536 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:05:48.536 17:56:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:48.536 17:56:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:48.536 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:48.536 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:48.536 { 00:05:48.536 "filename": "/tmp/spdk_mem_dump.txt" 00:05:48.536 } 00:05:48.536 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:48.536 17:56:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:48.536 DPDK memory size 816.000000 MiB in 1 heap(s) 00:05:48.536 1 heaps totaling size 816.000000 MiB 00:05:48.536 size: 816.000000 MiB heap id: 0 00:05:48.536 end heaps---------- 00:05:48.536 9 mempools totaling size 595.772034 MiB 00:05:48.536 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:48.536 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:48.536 size: 92.545471 MiB name: bdev_io_59042 00:05:48.536 size: 50.003479 MiB name: msgpool_59042 00:05:48.536 size: 36.509338 MiB name: fsdev_io_59042 00:05:48.536 size: 21.763794 MiB name: PDU_Pool 00:05:48.536 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:48.536 size: 4.133484 MiB name: evtpool_59042 00:05:48.536 size: 0.026123 MiB name: Session_Pool 00:05:48.536 end mempools------- 00:05:48.536 6 memzones totaling size 4.142822 MiB 00:05:48.536 size: 1.000366 MiB name: RG_ring_0_59042 00:05:48.536 size: 1.000366 MiB name: RG_ring_1_59042 00:05:48.536 size: 1.000366 MiB name: RG_ring_4_59042 00:05:48.536 size: 1.000366 MiB name: RG_ring_5_59042 00:05:48.536 size: 0.125366 MiB name: RG_ring_2_59042 00:05:48.536 size: 0.015991 MiB name: RG_ring_3_59042 00:05:48.536 end memzones------- 00:05:48.536 17:56:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:48.536 heap id: 0 total size: 816.000000 MiB number of busy elements: 314 number of free elements: 18 00:05:48.536 list of free elements. 
size: 16.791626 MiB 00:05:48.536 element at address: 0x200006400000 with size: 1.995972 MiB 00:05:48.536 element at address: 0x20000a600000 with size: 1.995972 MiB 00:05:48.536 element at address: 0x200003e00000 with size: 1.991028 MiB 00:05:48.536 element at address: 0x200018d00040 with size: 0.999939 MiB 00:05:48.536 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:48.536 element at address: 0x200019200000 with size: 0.999084 MiB 00:05:48.536 element at address: 0x200031e00000 with size: 0.994324 MiB 00:05:48.536 element at address: 0x200000400000 with size: 0.992004 MiB 00:05:48.536 element at address: 0x200018a00000 with size: 0.959656 MiB 00:05:48.536 element at address: 0x200019500040 with size: 0.936401 MiB 00:05:48.536 element at address: 0x200000200000 with size: 0.716980 MiB 00:05:48.536 element at address: 0x20001ac00000 with size: 0.561951 MiB 00:05:48.536 element at address: 0x200000c00000 with size: 0.490173 MiB 00:05:48.536 element at address: 0x200018e00000 with size: 0.487976 MiB 00:05:48.536 element at address: 0x200019600000 with size: 0.485413 MiB 00:05:48.536 element at address: 0x200012c00000 with size: 0.443481 MiB 00:05:48.536 element at address: 0x200028000000 with size: 0.390442 MiB 00:05:48.536 element at address: 0x200000800000 with size: 0.350891 MiB 00:05:48.536 list of standard malloc elements. size: 199.287476 MiB 00:05:48.536 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:05:48.536 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:05:48.536 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:05:48.536 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:48.536 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:48.536 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:48.536 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:05:48.536 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:48.536 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:05:48.536 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:05:48.536 element at address: 0x200012bff040 with size: 0.000305 MiB 00:05:48.536 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:05:48.536 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:05:48.536 element at address: 0x2000004ff040 with size: 0.000244 MiB
00:05:48.536 [standard-malloc element listing continues: several hundred further entries, each 0.000244 MiB, in non-contiguous runs from 0x2000004ff140 through 0x20002806fe80 — uniform per-element lines condensed]
00:05:48.538 list of memzone associated elements.
size: 599.920898 MiB 00:05:48.538 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:05:48.538 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:48.538 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:05:48.538 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:48.538 element at address: 0x200012df4740 with size: 92.045105 MiB 00:05:48.538 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_59042_0 00:05:48.538 element at address: 0x200000dff340 with size: 48.003113 MiB 00:05:48.538 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59042_0 00:05:48.538 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:05:48.538 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59042_0 00:05:48.538 element at address: 0x2000197be900 with size: 20.255615 MiB 00:05:48.538 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:48.538 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:05:48.538 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:48.538 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:05:48.538 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59042_0 00:05:48.538 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:05:48.538 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59042 00:05:48.538 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:48.538 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59042 00:05:48.538 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:48.538 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:48.538 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:05:48.538 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:48.538 element at address: 0x200018afde00 with size: 1.008179 MiB 00:05:48.538 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:48.538 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:05:48.538 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:48.538 element at address: 0x200000cff100 with size: 1.000549 MiB 00:05:48.538 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59042 00:05:48.538 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:05:48.538 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59042 00:05:48.538 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:05:48.538 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59042 00:05:48.538 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:05:48.538 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59042 00:05:48.538 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:05:48.538 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59042 00:05:48.538 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:05:48.538 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59042 00:05:48.538 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:05:48.538 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:48.538 element at address: 0x200012c72280 with size: 0.500549 MiB 00:05:48.538 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:48.538 element at address: 0x20001967c440 with size: 0.250549 MiB 00:05:48.538 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:05:48.538 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:05:48.538 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59042 00:05:48.538 element at address: 0x20000085df80 with size: 0.125549 MiB 00:05:48.538 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59042 00:05:48.538 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:05:48.538 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:48.538 element at address: 0x200028064140 with size: 0.023804 MiB 00:05:48.538 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:48.538 element at address: 0x200000859d40 with size: 0.016174 MiB 00:05:48.538 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59042 00:05:48.538 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:05:48.538 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:48.538 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:05:48.538 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59042 00:05:48.538 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:05:48.538 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59042 00:05:48.538 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:05:48.538 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59042 00:05:48.538 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:05:48.538 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:48.538 17:56:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:48.538 17:56:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59042 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 59042 ']' 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 59042 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59042 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:05:48.538 killing process with pid 59042 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59042' 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 59042 00:05:48.538 17:56:04 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 59042 00:05:51.066 00:05:51.066 real 0m3.666s 00:05:51.066 user 0m3.801s 00:05:51.066 sys 0m0.495s 00:05:51.066 17:56:06 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:51.066 ************************************ 00:05:51.066 END TEST dpdk_mem_utility 00:05:51.066 ************************************ 00:05:51.066 17:56:06 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:51.066 17:56:07 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.066 17:56:07 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:51.066 17:56:07 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.066 17:56:07 -- common/autotest_common.sh@10 -- # set +x 
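The element and memzone listing above is DPDK allocator state that test_dpdk_mem_info.sh captures (via RPC) before and after an allocation, then diffs. As an aside, a comparable dump can be produced from any DPDK process with the public API; the following is an illustrative sketch only, not the SPDK test's source:

```c
/* Illustrative sketch (not SPDK's test_dpdk_mem_info.sh): print the same
 * kind of per-element heap listing and memzone view shown in the log above,
 * using public DPDK APIs. Build against DPDK (e.g. via pkg-config libdpdk). */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_malloc.h>
#include <rte_memzone.h>

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0) {	/* parse EAL args, map hugepages */
		fprintf(stderr, "rte_eal_init failed\n");
		return 1;
	}
	rte_malloc_dump_heaps(stdout);	/* per-element address/size listing */
	rte_memzone_dump(stdout);	/* memzone-associated elements view */
	rte_eal_cleanup();
	return 0;
}
```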
00:05:51.066 ************************************ 00:05:51.066 START TEST event 00:05:51.066 ************************************ 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:51.066 * Looking for test storage... 00:05:51.066 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1691 -- # lcov --version 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:51.066 17:56:07 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.066 17:56:07 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.066 17:56:07 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.066 17:56:07 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.066 17:56:07 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.066 17:56:07 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.066 17:56:07 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.066 17:56:07 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.066 17:56:07 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.066 17:56:07 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.066 17:56:07 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.066 17:56:07 event -- scripts/common.sh@344 -- # case "$op" in 00:05:51.066 17:56:07 event -- scripts/common.sh@345 -- # : 1 00:05:51.066 17:56:07 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.066 17:56:07 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:51.066 17:56:07 event -- scripts/common.sh@365 -- # decimal 1 00:05:51.066 17:56:07 event -- scripts/common.sh@353 -- # local d=1 00:05:51.066 17:56:07 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.066 17:56:07 event -- scripts/common.sh@355 -- # echo 1 00:05:51.066 17:56:07 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.066 17:56:07 event -- scripts/common.sh@366 -- # decimal 2 00:05:51.066 17:56:07 event -- scripts/common.sh@353 -- # local d=2 00:05:51.066 17:56:07 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.066 17:56:07 event -- scripts/common.sh@355 -- # echo 2 00:05:51.066 17:56:07 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.066 17:56:07 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.066 17:56:07 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.066 17:56:07 event -- scripts/common.sh@368 -- # return 0 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:51.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.066 --rc genhtml_branch_coverage=1 00:05:51.066 --rc genhtml_function_coverage=1 00:05:51.066 --rc genhtml_legend=1 00:05:51.066 --rc geninfo_all_blocks=1 00:05:51.066 --rc geninfo_unexecuted_blocks=1 00:05:51.066 00:05:51.066 ' 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:51.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.066 --rc genhtml_branch_coverage=1 00:05:51.066 --rc genhtml_function_coverage=1 00:05:51.066 --rc genhtml_legend=1 00:05:51.066 --rc 
geninfo_all_blocks=1 00:05:51.066 --rc geninfo_unexecuted_blocks=1 00:05:51.066 00:05:51.066 ' 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:51.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.066 --rc genhtml_branch_coverage=1 00:05:51.066 --rc genhtml_function_coverage=1 00:05:51.066 --rc genhtml_legend=1 00:05:51.066 --rc geninfo_all_blocks=1 00:05:51.066 --rc geninfo_unexecuted_blocks=1 00:05:51.066 00:05:51.066 ' 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:51.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.066 --rc genhtml_branch_coverage=1 00:05:51.066 --rc genhtml_function_coverage=1 00:05:51.066 --rc genhtml_legend=1 00:05:51.066 --rc geninfo_all_blocks=1 00:05:51.066 --rc geninfo_unexecuted_blocks=1 00:05:51.066 00:05:51.066 ' 00:05:51.066 17:56:07 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:51.066 17:56:07 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:51.066 17:56:07 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:05:51.066 17:56:07 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:51.066 17:56:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.066 ************************************ 00:05:51.066 START TEST event_perf 00:05:51.066 ************************************ 00:05:51.066 17:56:07 event.event_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:51.066 Running I/O for 1 seconds...[2024-10-28 17:56:07.243004] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:05:51.066 [2024-10-28 17:56:07.243156] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59140 ] 00:05:51.066 [2024-10-28 17:56:07.431147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:51.325 [2024-10-28 17:56:07.561922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:51.325 [2024-10-28 17:56:07.562031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:51.325 Running I/O for 1 seconds...[2024-10-28 17:56:07.562603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:51.325 [2024-10-28 17:56:07.562642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.699 00:05:52.699 lcore 0: 190862 00:05:52.699 lcore 1: 190861 00:05:52.699 lcore 2: 190863 00:05:52.699 lcore 3: 190863 00:05:52.699 done. 
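The four "lcore N: ~190862" counters above are event_perf's result: each reactor reports how many events it processed during the one-second run. The mechanism is an event that re-enqueues itself on the next reactor; the sketch below shows that pattern with SPDK's public event API, with invented names (bounce, g_count), not event_perf's actual source:

```c
/* Hedged sketch of the event-bounce pattern behind the per-lcore counters
 * above; bounce() and g_count are invented names for illustration. */
#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/event.h"

#define MAX_CORES 128
static uint64_t g_count[MAX_CORES];	/* events handled per lcore */
static volatile bool g_done;		/* set elsewhere when the 1s window ends */

static void
bounce(void *arg1, void *arg2)
{
	uint32_t core = spdk_env_get_current_core();
	uint32_t next;

	g_count[core]++;
	if (g_done) {
		return;
	}
	/* Re-queue this event on the next core in the app's core mask. */
	next = spdk_env_get_next_core(core);
	if (next == UINT32_MAX) {
		next = spdk_env_get_first_core();
	}
	spdk_event_call(spdk_event_allocate(next, bounce, NULL, NULL));
}

static void
start(void *arg1, void *arg2)
{
	uint32_t core;

	/* Seed one self-propagating event on every reactor (-m 0xF -> 4 cores). */
	SPDK_ENV_FOREACH_CORE(core) {
		spdk_event_call(spdk_event_allocate(core, bounce, NULL, NULL));
	}
}
```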
00:05:52.699 00:05:52.699 real 0m1.609s 00:05:52.699 user 0m4.366s 00:05:52.699 sys 0m0.109s 00:05:52.699 17:56:08 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:52.699 ************************************ 00:05:52.699 END TEST event_perf 00:05:52.699 ************************************ 00:05:52.699 17:56:08 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:52.699 17:56:08 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:52.699 17:56:08 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:52.699 17:56:08 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:52.699 17:56:08 event -- common/autotest_common.sh@10 -- # set +x 00:05:52.699 ************************************ 00:05:52.699 START TEST event_reactor 00:05:52.699 ************************************ 00:05:52.699 17:56:08 event.event_reactor -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:52.699 [2024-10-28 17:56:08.901164] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:05:52.699 [2024-10-28 17:56:08.901371] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59185 ] 00:05:52.699 [2024-10-28 17:56:09.083571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.957 [2024-10-28 17:56:09.187358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.330 test_start 00:05:54.330 oneshot 00:05:54.330 tick 100 00:05:54.330 tick 100 00:05:54.330 tick 250 00:05:54.330 tick 100 00:05:54.330 tick 100 00:05:54.330 tick 100 00:05:54.330 tick 250 00:05:54.330 tick 500 00:05:54.330 tick 100 00:05:54.330 tick 100 00:05:54.330 tick 250 00:05:54.330 tick 100 00:05:54.330 tick 100 00:05:54.330 test_end 00:05:54.330 00:05:54.330 real 0m1.558s 00:05:54.330 user 0m1.349s 00:05:54.330 sys 0m0.099s 00:05:54.330 17:56:10 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:54.330 ************************************ 00:05:54.330 END TEST event_reactor 00:05:54.330 ************************************ 00:05:54.331 17:56:10 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:54.331 17:56:10 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.331 17:56:10 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:05:54.331 17:56:10 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:54.331 17:56:10 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.331 ************************************ 00:05:54.331 START TEST event_reactor_perf 00:05:54.331 ************************************ 00:05:54.331 17:56:10 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:54.331 [2024-10-28 17:56:10.493334] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
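The "oneshot / tick 100 / tick 250 / tick 500" lines from the event_reactor run above come from timed pollers firing at different periods within the 1-second -t window (the 100 µs poller fires most often, the 500 µs one least). A hedged sketch of that poller pattern follows; tick(), oneshot(), and start() are invented names, not the reactor test's actual source:

```c
/* Hedged sketch of the timed-poller pattern behind the tick output above. */
#include "spdk/stdinc.h"
#include "spdk/thread.h"
#include "spdk/event.h"

static struct spdk_poller *g_oneshot;

static int
tick(void *ctx)
{
	printf("tick %ld\n", (long)(intptr_t)ctx);	/* prints its own period */
	return SPDK_POLLER_BUSY;
}

static int
oneshot(void *ctx)
{
	printf("oneshot\n");
	spdk_poller_unregister(&g_oneshot);	/* fire once, then remove */
	return SPDK_POLLER_BUSY;
}

static void
start(void *arg1, void *arg2)	/* runs on a reactor via spdk_app_start() */
{
	printf("test_start\n");
	g_oneshot = spdk_poller_register(oneshot, NULL, 0);
	/* Periods are in microseconds; shorter periods fire more ticks. */
	spdk_poller_register(tick, (void *)100, 100);
	spdk_poller_register(tick, (void *)250, 250);
	spdk_poller_register(tick, (void *)500, 500);
}
```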
00:05:54.331 [2024-10-28 17:56:10.493472] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59222 ] 00:05:54.331 [2024-10-28 17:56:10.667031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.331 [2024-10-28 17:56:10.771123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.707 test_start 00:05:55.707 test_end 00:05:55.707 Performance: 273053 events per second 00:05:55.707 00:05:55.707 real 0m1.539s 00:05:55.707 user 0m1.345s 00:05:55.707 sys 0m0.084s 00:05:55.707 17:56:11 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:55.707 17:56:11 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.707 ************************************ 00:05:55.707 END TEST event_reactor_perf 00:05:55.707 ************************************ 00:05:55.707 17:56:12 event -- event/event.sh@49 -- # uname -s 00:05:55.707 17:56:12 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:55.707 17:56:12 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:55.707 17:56:12 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:55.708 17:56:12 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:55.708 17:56:12 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.708 ************************************ 00:05:55.708 START TEST event_scheduler 00:05:55.708 ************************************ 00:05:55.708 17:56:12 event.event_scheduler -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:55.708 * Looking for test storage... 
00:05:55.708 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:55.708 17:56:12 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:05:55.708 17:56:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:05:55.708 17:56:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:05:55.966 17:56:12 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:55.966 17:56:12 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:55.966 17:56:12 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:55.966 17:56:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:05:55.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.966 --rc genhtml_branch_coverage=1 00:05:55.966 --rc genhtml_function_coverage=1 00:05:55.966 --rc genhtml_legend=1 00:05:55.966 --rc geninfo_all_blocks=1 00:05:55.966 --rc geninfo_unexecuted_blocks=1 00:05:55.966 00:05:55.966 ' 00:05:55.966 17:56:12 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:05:55.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.966 --rc genhtml_branch_coverage=1 00:05:55.966 --rc genhtml_function_coverage=1 00:05:55.966 --rc genhtml_legend=1 00:05:55.966 --rc geninfo_all_blocks=1 00:05:55.966 --rc geninfo_unexecuted_blocks=1 00:05:55.966 00:05:55.966 ' 00:05:55.966 17:56:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:05:55.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.966 --rc genhtml_branch_coverage=1 00:05:55.966 --rc genhtml_function_coverage=1 00:05:55.966 --rc genhtml_legend=1 00:05:55.966 --rc geninfo_all_blocks=1 00:05:55.966 --rc geninfo_unexecuted_blocks=1 00:05:55.966 00:05:55.966 ' 00:05:55.966 17:56:12 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:05:55.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:55.966 --rc genhtml_branch_coverage=1 00:05:55.966 --rc genhtml_function_coverage=1 00:05:55.966 --rc genhtml_legend=1 00:05:55.967 --rc geninfo_all_blocks=1 00:05:55.967 --rc geninfo_unexecuted_blocks=1 00:05:55.967 00:05:55.967 ' 00:05:55.967 17:56:12 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:55.967 17:56:12 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59292 00:05:55.967 17:56:12 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:55.967 17:56:12 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:55.967 17:56:12 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59292 00:05:55.967 17:56:12 
event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 59292 ']' 00:05:55.967 17:56:12 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.967 17:56:12 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:05:55.967 17:56:12 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.967 17:56:12 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:05:55.967 17:56:12 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:55.967 [2024-10-28 17:56:12.340813] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:05:55.967 [2024-10-28 17:56:12.340984] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59292 ] 00:05:56.225 [2024-10-28 17:56:12.515743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.225 [2024-10-28 17:56:12.623689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.225 [2024-10-28 17:56:12.623860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.225 [2024-10-28 17:56:12.624344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.225 [2024-10-28 17:56:12.624348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.157 17:56:13 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:05:57.157 17:56:13 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:05:57.157 17:56:13 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:57.157 17:56:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.157 17:56:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.157 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.157 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.157 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.157 POWER: Cannot set governor of lcore 0 to performance 00:05:57.157 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.157 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.157 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:57.157 POWER: Cannot set governor of lcore 0 to userspace 00:05:57.157 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:57.157 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:57.157 POWER: Unable to set Power Management Environment for lcore 0 00:05:57.157 [2024-10-28 17:56:13.438212] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:57.157 [2024-10-28 17:56:13.438244] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:57.157 [2024-10-28 17:56:13.438259] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:57.157 [2024-10-28 17:56:13.438282] 
scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:57.157 [2024-10-28 17:56:13.438294] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:57.157 [2024-10-28 17:56:13.438319] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:57.157 17:56:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.157 17:56:13 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:57.157 17:56:13 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.157 17:56:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 [2024-10-28 17:56:13.724980] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:57.415 17:56:13 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:57.415 17:56:13 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:57.415 17:56:13 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 ************************************ 00:05:57.415 START TEST scheduler_create_thread 00:05:57.415 ************************************ 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 2 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 3 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 4 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 5 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 6 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 7 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 8 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 9 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 10 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:57.415 17:56:13 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.790 17:56:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:58.790 00:05:58.790 real 0m1.175s 00:05:58.790 user 0m0.012s 00:05:58.790 sys 0m0.006s 00:05:58.790 17:56:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:58.790 17:56:14 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:58.790 ************************************ 00:05:58.790 END TEST scheduler_create_thread 00:05:58.790 ************************************ 00:05:58.790 17:56:14 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:58.790 17:56:14 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59292 00:05:58.790 17:56:14 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 59292 ']' 00:05:58.790 17:56:14 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 59292 00:05:58.790 17:56:14 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:05:58.790 17:56:14 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:05:58.790 17:56:14 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59292 00:05:58.790 17:56:14 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:05:58.790 17:56:14 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:05:58.790 killing process with pid 59292 00:05:58.790 17:56:14 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59292' 00:05:58.790 17:56:14 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 59292 00:05:58.790 17:56:14 event.event_scheduler -- 
common/autotest_common.sh@976 -- # wait 59292 00:05:59.049 [2024-10-28 17:56:15.388449] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:59.984 00:05:59.984 real 0m4.365s 00:05:59.984 user 0m7.824s 00:05:59.984 sys 0m0.420s 00:05:59.984 ************************************ 00:05:59.984 END TEST event_scheduler 00:05:59.984 17:56:16 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:05:59.984 17:56:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.984 ************************************ 00:05:59.984 17:56:16 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.984 17:56:16 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.984 17:56:16 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:05:59.984 17:56:16 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:05:59.984 17:56:16 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.984 ************************************ 00:05:59.984 START TEST app_repeat 00:05:59.984 ************************************ 00:05:59.984 17:56:16 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:05:59.984 17:56:16 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.984 17:56:16 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.984 17:56:16 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.984 17:56:16 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.984 17:56:16 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.984 17:56:16 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.984 17:56:16 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:00.243 17:56:16 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59387 00:06:00.243 17:56:16 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:00.243 17:56:16 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.243 17:56:16 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59387' 00:06:00.243 Process app_repeat pid: 59387 00:06:00.243 17:56:16 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:00.243 spdk_app_start Round 0 00:06:00.243 17:56:16 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:00.243 17:56:16 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59387 /var/tmp/spdk-nbd.sock 00:06:00.243 17:56:16 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59387 ']' 00:06:00.243 17:56:16 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:00.243 17:56:16 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:00.243 17:56:16 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:00.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:00.243 17:56:16 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:00.243 17:56:16 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:00.243 [2024-10-28 17:56:16.534123] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
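The scheduler_create_thread subtest above drove a test-plugin RPC that created threads pinned one-per-core ("active_pinned" with -m 0x1..0x8 at 100% simulated activity, "idle_pinned" at 0%), set one thread's activity to 50, and deleted another. The core pinning itself maps onto SPDK's public thread API; a hedged sketch follows (invented helper name; the -a activity knob belongs to the test plugin, not this API):

```c
/* Hedged sketch of creating a core-pinned SPDK thread, as the
 * scheduler_create_thread subtest does through its plugin RPCs. */
#include "spdk/stdinc.h"
#include "spdk/cpuset.h"
#include "spdk/thread.h"

static struct spdk_thread *
create_pinned_thread(const char *name, uint32_t core)
{
	struct spdk_cpuset mask;

	spdk_cpuset_zero(&mask);
	spdk_cpuset_set_cpu(&mask, core, true);	/* e.g. core 1 -> mask 0x2 */
	/* A one-core cpumask keeps the thread on its reactor even while the
	 * dynamic scheduler rebalances unpinned threads. */
	return spdk_thread_create(name, &mask);
}
```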
00:06:00.243 [2024-10-28 17:56:16.534337] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59387 ] 00:06:00.501 [2024-10-28 17:56:16.734165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:00.501 [2024-10-28 17:56:16.838868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.501 [2024-10-28 17:56:16.838875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.434 17:56:17 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:01.434 17:56:17 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:01.434 17:56:17 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.691 Malloc0 00:06:01.691 17:56:17 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:01.967 Malloc1 00:06:01.967 17:56:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.967 17:56:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:02.224 /dev/nbd0 00:06:02.224 17:56:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:02.224 17:56:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:02.224 17:56:18 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:02.224 17:56:18 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:02.224 17:56:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:02.224 17:56:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:02.224 17:56:18 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:02.224 17:56:18 event.app_repeat -- 
common/autotest_common.sh@875 -- # break 00:06:02.224 17:56:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:02.224 17:56:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:02.224 17:56:18 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.224 1+0 records in 00:06:02.224 1+0 records out 00:06:02.224 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000351811 s, 11.6 MB/s 00:06:02.225 17:56:18 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.225 17:56:18 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:02.225 17:56:18 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.482 17:56:18 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:02.482 17:56:18 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:02.482 17:56:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.482 17:56:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.482 17:56:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:02.740 /dev/nbd1 00:06:02.740 17:56:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:02.740 17:56:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:02.740 1+0 records in 00:06:02.740 1+0 records out 00:06:02.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000332844 s, 12.3 MB/s 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:02.740 17:56:18 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:02.740 17:56:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:02.740 17:56:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:02.740 17:56:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:02.740 17:56:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
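The nbd0/nbd1 setup above exercises waitfornbd: poll /proc/partitions until the device node appears, then prove it accepts I/O by reading one block and checking its size. A minimal sketch reconstructed from the traced commands (the 0.1 s retry interval and the $testdir variable are assumptions; the log uses /home/vagrant/spdk_repo/spdk/test/event):

    waitfornbd() {
        local nbd_name=$1
        local i size
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        for ((i = 1; i <= 20; i++)); do
            # read a single 4 KiB block straight off the device (iflag=direct bypasses the page cache)
            dd if=/dev/$nbd_name of="$testdir/nbdtest" bs=4096 count=1 iflag=direct
            size=$(stat -c %s "$testdir/nbdtest")
            rm -f "$testdir/nbdtest"
            [[ $size != 0 ]] && return 0
            sleep 0.1
        done
        return 1
    }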
00:06:02.740 17:56:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.997 17:56:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:02.997 { 00:06:02.997 "nbd_device": "/dev/nbd0", 00:06:02.997 "bdev_name": "Malloc0" 00:06:02.997 }, 00:06:02.997 { 00:06:02.997 "nbd_device": "/dev/nbd1", 00:06:02.997 "bdev_name": "Malloc1" 00:06:02.997 } 00:06:02.997 ]' 00:06:02.997 17:56:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:02.997 { 00:06:02.997 "nbd_device": "/dev/nbd0", 00:06:02.997 "bdev_name": "Malloc0" 00:06:02.997 }, 00:06:02.997 { 00:06:02.997 "nbd_device": "/dev/nbd1", 00:06:02.997 "bdev_name": "Malloc1" 00:06:02.997 } 00:06:02.997 ]' 00:06:02.997 17:56:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.997 17:56:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:02.997 /dev/nbd1' 00:06:02.997 17:56:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:02.997 /dev/nbd1' 00:06:02.997 17:56:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.997 17:56:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:02.998 256+0 records in 00:06:02.998 256+0 records out 00:06:02.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00958801 s, 109 MB/s 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:02.998 256+0 records in 00:06:02.998 256+0 records out 00:06:02.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304976 s, 34.4 MB/s 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:02.998 256+0 records in 00:06:02.998 256+0 records out 00:06:02.998 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.036091 s, 29.1 MB/s 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:02.998 17:56:19 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:02.998 17:56:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:03.255 17:56:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:03.255 17:56:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:03.255 17:56:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.255 17:56:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:03.255 17:56:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:03.255 17:56:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:03.255 17:56:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.255 17:56:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:03.513 17:56:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.771 17:56:20 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.771 17:56:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:04.029 17:56:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:04.029 17:56:20 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:04.595 17:56:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:05.570 [2024-10-28 17:56:21.888917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.570 [2024-10-28 17:56:21.991048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.570 [2024-10-28 17:56:21.991066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.828 [2024-10-28 17:56:22.172957] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:05.828 [2024-10-28 17:56:22.173066] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:07.729 17:56:23 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:07.729 spdk_app_start Round 1 00:06:07.729 17:56:23 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:07.729 17:56:23 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59387 /var/tmp/spdk-nbd.sock 00:06:07.729 17:56:23 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59387 ']' 00:06:07.729 17:56:23 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:07.729 17:56:23 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:07.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:07.729 17:56:23 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
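nbd_get_count, traced above both with two disks exported and (after teardown) with none, boils down to counting /dev/nbd entries in the nbd_get_disks JSON. The `|| true` matters: grep -c exits non-zero when it counts zero matches, as the empty-list case above shows. Sketch from the traced commands ($rootdir stands in for /home/vagrant/spdk_repo/spdk):

    nbd_get_count() {
        local rpc_server=$1
        local nbd_disks_json nbd_disks_name count
        nbd_disks_json=$("$rootdir/scripts/rpc.py" -s "$rpc_server" nbd_get_disks)
        nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
        count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)   # don't let an empty list abort the caller
        echo "$count"
    }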
00:06:07.729 17:56:23 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:07.729 17:56:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:07.729 17:56:24 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:07.729 17:56:24 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:07.729 17:56:24 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.294 Malloc0 00:06:08.294 17:56:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:08.555 Malloc1 00:06:08.555 17:56:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:08.555 17:56:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:08.812 /dev/nbd0 00:06:09.070 17:56:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:09.070 17:56:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.070 1+0 records in 00:06:09.070 1+0 records out 
00:06:09.070 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000302775 s, 13.5 MB/s 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.070 17:56:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:09.070 17:56:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.070 17:56:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.070 17:56:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:09.328 /dev/nbd1 00:06:09.328 17:56:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:09.328 17:56:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:09.328 1+0 records in 00:06:09.328 1+0 records out 00:06:09.328 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265502 s, 15.4 MB/s 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:09.328 17:56:25 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:09.328 17:56:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:09.328 17:56:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:09.329 17:56:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:09.329 17:56:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.329 17:56:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:09.586 17:56:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:09.586 { 00:06:09.586 "nbd_device": "/dev/nbd0", 00:06:09.586 "bdev_name": "Malloc0" 00:06:09.586 }, 00:06:09.586 { 00:06:09.586 "nbd_device": "/dev/nbd1", 00:06:09.586 "bdev_name": "Malloc1" 00:06:09.587 } 
00:06:09.587 ]' 00:06:09.587 17:56:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:09.587 { 00:06:09.587 "nbd_device": "/dev/nbd0", 00:06:09.587 "bdev_name": "Malloc0" 00:06:09.587 }, 00:06:09.587 { 00:06:09.587 "nbd_device": "/dev/nbd1", 00:06:09.587 "bdev_name": "Malloc1" 00:06:09.587 } 00:06:09.587 ]' 00:06:09.587 17:56:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:09.587 /dev/nbd1' 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:09.587 /dev/nbd1' 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:09.587 256+0 records in 00:06:09.587 256+0 records out 00:06:09.587 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00782415 s, 134 MB/s 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.587 17:56:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:09.845 256+0 records in 00:06:09.845 256+0 records out 00:06:09.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0272637 s, 38.5 MB/s 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:09.845 256+0 records in 00:06:09.845 256+0 records out 00:06:09.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0363474 s, 28.8 MB/s 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:09.845 17:56:26 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:09.845 17:56:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:10.117 17:56:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.375 17:56:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@64 
-- # jq -r '.[] | .nbd_device' 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:10.634 17:56:27 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:10.634 17:56:27 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.199 17:56:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:12.132 [2024-10-28 17:56:28.525693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.390 [2024-10-28 17:56:28.624708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.390 [2024-10-28 17:56:28.624720] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.390 [2024-10-28 17:56:28.790177] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:12.390 [2024-10-28 17:56:28.790237] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:14.289 spdk_app_start Round 2 00:06:14.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.289 17:56:30 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.289 17:56:30 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:14.289 17:56:30 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59387 /var/tmp/spdk-nbd.sock 00:06:14.289 17:56:30 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59387 ']' 00:06:14.289 17:56:30 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.289 17:56:30 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:14.289 17:56:30 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
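Each 'spdk_app_start Round N' block above repeats the same cycle, which event.sh drives roughly as sketched below (names follow the traced script; app_repeat was launched with -t 4, so spdk_kill_instance SIGTERM ends one iteration and the app restarts itself, as the 'Shutdown signal received, stop current app iteration' messages later in the log show):

    for i in {0..2}; do
        echo "spdk_app_start Round $i"
        waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc0
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096   # Malloc1
        nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
        "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM   # ends this iteration
        sleep 3   # give the app time to restart before the next round polls it
    done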
00:06:14.289 17:56:30 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:14.289 17:56:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.547 17:56:30 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:14.547 17:56:30 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:14.547 17:56:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:14.805 Malloc0 00:06:14.805 17:56:31 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.062 Malloc1 00:06:15.062 17:56:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.062 17:56:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.628 /dev/nbd0 00:06:15.628 17:56:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.628 17:56:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.628 1+0 records in 00:06:15.628 1+0 records out 
00:06:15.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000291528 s, 14.1 MB/s 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:15.628 17:56:31 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:15.628 17:56:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.628 17:56:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.628 17:56:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.886 /dev/nbd1 00:06:15.886 17:56:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.886 17:56:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.886 1+0 records in 00:06:15.886 1+0 records out 00:06:15.886 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000336924 s, 12.2 MB/s 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:06:15.886 17:56:32 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:06:15.886 17:56:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.886 17:56:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.886 17:56:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.886 17:56:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.886 17:56:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:16.144 { 00:06:16.144 "nbd_device": "/dev/nbd0", 00:06:16.144 "bdev_name": "Malloc0" 00:06:16.144 }, 00:06:16.144 { 00:06:16.144 "nbd_device": "/dev/nbd1", 00:06:16.144 "bdev_name": "Malloc1" 00:06:16.144 } 
00:06:16.144 ]' 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:16.144 { 00:06:16.144 "nbd_device": "/dev/nbd0", 00:06:16.144 "bdev_name": "Malloc0" 00:06:16.144 }, 00:06:16.144 { 00:06:16.144 "nbd_device": "/dev/nbd1", 00:06:16.144 "bdev_name": "Malloc1" 00:06:16.144 } 00:06:16.144 ]' 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:16.144 /dev/nbd1' 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:16.144 /dev/nbd1' 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:16.144 256+0 records in 00:06:16.144 256+0 records out 00:06:16.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0068448 s, 153 MB/s 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:16.144 256+0 records in 00:06:16.144 256+0 records out 00:06:16.144 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0306315 s, 34.2 MB/s 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:16.144 17:56:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:16.402 256+0 records in 00:06:16.402 256+0 records out 00:06:16.402 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0413133 s, 25.4 MB/s 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.402 17:56:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.660 17:56:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.917 17:56:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
echo '[]' 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:17.482 17:56:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:17.483 17:56:33 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:17.740 17:56:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.114 [2024-10-28 17:56:35.245036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.114 [2024-10-28 17:56:35.344155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.114 [2024-10-28 17:56:35.344169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.114 [2024-10-28 17:56:35.508400] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.114 [2024-10-28 17:56:35.508502] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.016 17:56:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59387 /var/tmp/spdk-nbd.sock 00:06:21.016 17:56:37 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 59387 ']' 00:06:21.016 17:56:37 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.016 17:56:37 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:21.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.016 17:56:37 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
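The write/verify passes above come from nbd_rpc_data_verify, whose data path is: fill a 1 MiB temp file from /dev/urandom, dd it onto every exported nbd device, then cmp each device against the file. A sketch from the traced commands ($testdir again stands in for the test/event directory):

    nbd_dd_data_verify() {
        local nbd_list=($1)
        local operation=$2
        local tmp_file=$testdir/nbdrandtest
        local i
        if [[ $operation == write ]]; then
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256          # 1 MiB of random data
            for i in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$i" bs=4096 count=256 oflag=direct # push it through each device
            done
        elif [[ $operation == verify ]]; then
            for i in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$i"                            # byte-for-byte check of the first 1 MiB
            done
            rm "$tmp_file"
        fi
    }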
00:06:21.016 17:56:37 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:21.016 17:56:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:06:21.275 17:56:37 event.app_repeat -- event/event.sh@39 -- # killprocess 59387 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 59387 ']' 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 59387 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59387 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:21.275 killing process with pid 59387 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59387' 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@971 -- # kill 59387 00:06:21.275 17:56:37 event.app_repeat -- common/autotest_common.sh@976 -- # wait 59387 00:06:22.209 spdk_app_start is called in Round 0. 00:06:22.209 Shutdown signal received, stop current app iteration 00:06:22.209 Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 reinitialization... 00:06:22.209 spdk_app_start is called in Round 1. 00:06:22.209 Shutdown signal received, stop current app iteration 00:06:22.209 Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 reinitialization... 00:06:22.209 spdk_app_start is called in Round 2. 00:06:22.209 Shutdown signal received, stop current app iteration 00:06:22.209 Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 reinitialization... 00:06:22.209 spdk_app_start is called in Round 3. 00:06:22.209 Shutdown signal received, stop current app iteration 00:06:22.209 17:56:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:22.209 17:56:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:22.209 00:06:22.209 real 0m22.005s 00:06:22.209 user 0m49.603s 00:06:22.209 sys 0m2.756s 00:06:22.209 17:56:38 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:22.209 17:56:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:22.209 ************************************ 00:06:22.209 END TEST app_repeat 00:06:22.209 ************************************ 00:06:22.209 17:56:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:22.209 17:56:38 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.209 17:56:38 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:22.209 17:56:38 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.209 17:56:38 event -- common/autotest_common.sh@10 -- # set +x 00:06:22.209 ************************************ 00:06:22.209 START TEST cpu_locks 00:06:22.209 ************************************ 00:06:22.209 17:56:38 event.cpu_locks -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:22.209 * Looking for test storage... 
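The START TEST/END TEST banners and the real/user/sys lines throughout this log come from the run_test wrapper. A rough sketch of its visible behaviour only (the argument-count guard matches the traced '[' 2 -le 1 ']'; the xtrace toggling and timing bookkeeping in the real helper are omitted):

    run_test() {
        (($# > 1)) || return 1          # needs a test name plus a command
        local test_name=$1
        shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"                       # produces the real/user/sys summary
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }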
00:06:22.209 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:22.210 17:56:38 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:22.210 17:56:38 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:06:22.210 17:56:38 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:22.210 17:56:38 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:22.210 17:56:38 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:22.210 17:56:38 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:22.210 17:56:38 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:22.210 17:56:38 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:22.210 17:56:38 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:22.210 17:56:38 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:22.210 17:56:38 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:22.210 17:56:38 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:22.468 17:56:38 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:22.468 17:56:38 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:22.468 17:56:38 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.468 --rc genhtml_branch_coverage=1 00:06:22.468 --rc genhtml_function_coverage=1 00:06:22.468 --rc genhtml_legend=1 00:06:22.468 --rc geninfo_all_blocks=1 00:06:22.468 --rc geninfo_unexecuted_blocks=1 00:06:22.468 00:06:22.468 ' 00:06:22.468 17:56:38 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.468 --rc genhtml_branch_coverage=1 00:06:22.468 --rc genhtml_function_coverage=1 
00:06:22.468 --rc genhtml_legend=1 00:06:22.468 --rc geninfo_all_blocks=1 00:06:22.468 --rc geninfo_unexecuted_blocks=1 00:06:22.468 00:06:22.468 ' 00:06:22.468 17:56:38 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.468 --rc genhtml_branch_coverage=1 00:06:22.468 --rc genhtml_function_coverage=1 00:06:22.468 --rc genhtml_legend=1 00:06:22.468 --rc geninfo_all_blocks=1 00:06:22.468 --rc geninfo_unexecuted_blocks=1 00:06:22.468 00:06:22.468 ' 00:06:22.468 17:56:38 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:22.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:22.468 --rc genhtml_branch_coverage=1 00:06:22.468 --rc genhtml_function_coverage=1 00:06:22.468 --rc genhtml_legend=1 00:06:22.468 --rc geninfo_all_blocks=1 00:06:22.468 --rc geninfo_unexecuted_blocks=1 00:06:22.468 00:06:22.468 ' 00:06:22.468 17:56:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:22.468 17:56:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:22.468 17:56:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:22.468 17:56:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:22.468 17:56:38 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:22.468 17:56:38 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:22.468 17:56:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.468 ************************************ 00:06:22.468 START TEST default_locks 00:06:22.468 ************************************ 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59864 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59864 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59864 ']' 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:22.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:22.468 17:56:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:22.468 [2024-10-28 17:56:38.859974] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:06:22.468 [2024-10-28 17:56:38.860146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59864 ] 00:06:22.726 [2024-10-28 17:56:39.037198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.726 [2024-10-28 17:56:39.139891] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.657 17:56:39 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:23.657 17:56:39 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:06:23.657 17:56:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59864 00:06:23.657 17:56:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59864 00:06:23.657 17:56:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59864 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 59864 ']' 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 59864 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59864 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:23.915 killing process with pid 59864 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59864' 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 59864 00:06:23.915 17:56:40 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 59864 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59864 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59864 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 59864 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 59864 ']' 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.440 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.440 ERROR: process (pid: 59864) is no longer running 00:06:26.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (59864) - No such process 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.440 ************************************ 00:06:26.440 END TEST default_locks 00:06:26.440 ************************************ 00:06:26.440 00:06:26.440 real 0m3.679s 00:06:26.440 user 0m3.840s 00:06:26.440 sys 0m0.624s 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:26.440 17:56:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.440 17:56:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:26.440 17:56:42 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:26.440 17:56:42 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:26.440 17:56:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:26.440 ************************************ 00:06:26.440 START TEST default_locks_via_rpc 00:06:26.440 ************************************ 00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59941 00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59941 00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 59941 ']' 00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:26.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
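For readers following the trace: the locks_exist check that default_locks just exercised reduces to a one-line lslocks query against the target's pid. A minimal sketch, assuming util-linux lslocks is installed and the /var/tmp/spdk_cpu_lock_* file naming visible later in this log:

#!/usr/bin/env bash
# Sketch of the locks_exist check above: a target owns its core lock when
# lslocks reports a lock held by that pid on an spdk_cpu_lock file.
pid=$1
if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
    echo "pid $pid holds an SPDK CPU core lock"
else
    echo "pid $pid holds no SPDK CPU core lock"
fi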
00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:26.440 17:56:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.440 [2024-10-28 17:56:42.536001] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:06:26.440 [2024-10-28 17:56:42.536156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59941 ] 00:06:26.440 [2024-10-28 17:56:42.710748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.440 [2024-10-28 17:56:42.822052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59941 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59941 00:06:27.374 17:56:43 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59941 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 59941 ']' 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 59941 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 59941 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 59941' 00:06:27.631 killing process with pid 59941 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 59941 00:06:27.631 17:56:44 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 59941 00:06:30.172 00:06:30.172 real 0m3.730s 00:06:30.172 user 0m3.918s 00:06:30.172 sys 0m0.556s 00:06:30.172 17:56:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:30.172 17:56:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:30.172 ************************************ 00:06:30.172 END TEST default_locks_via_rpc 00:06:30.172 ************************************ 00:06:30.172 17:56:46 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:30.172 17:56:46 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:30.172 17:56:46 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:30.172 17:56:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:30.172 ************************************ 00:06:30.172 START TEST non_locking_app_on_locked_coremask 00:06:30.172 ************************************ 00:06:30.172 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:06:30.172 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60010 00:06:30.172 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:30.173 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60010 /var/tmp/spdk.sock 00:06:30.173 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60010 ']' 00:06:30.173 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.173 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:30.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.173 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.173 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:30.173 17:56:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.173 [2024-10-28 17:56:46.332268] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
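The default_locks_via_rpc run above toggles the same locks at runtime over JSON-RPC instead of restarting the target. A sketch of that round-trip, assuming SPDK's rpc.py exposes framework_disable_cpumask_locks and framework_enable_cpumask_locks as subcommands the way rpc_cmd invokes them here:

#!/usr/bin/env bash
# Sketch of the RPC round-trip in default_locks_via_rpc; rpc.py talks to
# /var/tmp/spdk.sock by default, matching the target started above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC framework_disable_cpumask_locks   # drop the lock files while running
$RPC framework_enable_cpumask_locks    # re-claim them; this fails if another
                                       # process grabbed the cores meanwhile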
00:06:30.173 [2024-10-28 17:56:46.332418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60010 ] 00:06:30.173 [2024-10-28 17:56:46.499278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.173 [2024-10-28 17:56:46.601245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60031 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60031 /var/tmp/spdk2.sock 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60031 ']' 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:31.106 17:56:47 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.106 [2024-10-28 17:56:47.471509] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:06:31.106 [2024-10-28 17:56:47.471665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60031 ] 00:06:31.363 [2024-10-28 17:56:47.669158] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
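The "CPU core locks deactivated." notice just above is what --disable-cpumask-locks prints, and it is the only reason the second target can share core 0 with pid 60010. A condensed sketch of the two-target setup, reusing the binary path, mask, and socket from this log (the sleeps stand in for the real waitforlisten helper):

#!/usr/bin/env bash
# Sketch of non_locking_app_on_locked_coremask: first target claims core 0,
# second shares it only because it opts out of lock claiming.
SPDK_TGT=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

"$SPDK_TGT" -m 0x1 &                   # claims the core-0 lock file
pid1=$!
sleep 2
"$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!                                # -r picks a second RPC socket
sleep 2
kill "$pid2" "$pid1"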
00:06:31.363 [2024-10-28 17:56:47.669227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.621 [2024-10-28 17:56:47.874757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.994 17:56:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:32.994 17:56:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:32.994 17:56:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60010 00:06:32.994 17:56:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60010 00:06:32.994 17:56:49 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60010 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60010 ']' 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60010 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60010 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:33.928 killing process with pid 60010 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60010' 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60010 00:06:33.928 17:56:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60010 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60031 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60031 ']' 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60031 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60031 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:38.145 killing process with pid 60031 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60031' 00:06:38.145 17:56:54 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60031 00:06:38.145 17:56:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60031 00:06:40.093 00:06:40.093 real 0m10.250s 00:06:40.093 user 0m10.859s 00:06:40.093 sys 0m1.183s 00:06:40.093 17:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:40.093 17:56:56 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.093 ************************************ 00:06:40.093 END TEST non_locking_app_on_locked_coremask 00:06:40.093 ************************************ 00:06:40.093 17:56:56 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:40.093 17:56:56 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:40.093 17:56:56 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:40.093 17:56:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:40.093 ************************************ 00:06:40.093 START TEST locking_app_on_unlocked_coremask 00:06:40.093 ************************************ 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60162 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60162 /var/tmp/spdk.sock 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60162 ']' 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:40.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:40.093 17:56:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:40.351 [2024-10-28 17:56:56.621619] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:06:40.351 [2024-10-28 17:56:56.621765] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60162 ] 00:06:40.351 [2024-10-28 17:56:56.800071] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:40.351 [2024-10-28 17:56:56.800136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.607 [2024-10-28 17:56:56.925020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60178 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60178 /var/tmp/spdk2.sock 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60178 ']' 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:41.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:41.539 17:56:57 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.539 [2024-10-28 17:56:57.807456] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:06:41.539 [2024-10-28 17:56:57.807671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60178 ] 00:06:41.539 [2024-10-28 17:56:58.007957] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.796 [2024-10-28 17:56:58.213075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.736 17:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:43.736 17:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:43.736 17:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60178 00:06:43.736 17:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60178 00:06:43.737 17:56:59 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:44.302 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60162 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60162 ']' 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60162 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60162 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:44.303 killing process with pid 60162 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60162' 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60162 00:06:44.303 17:57:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60162 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60178 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60178 ']' 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 60178 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60178 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:48.490 killing process with pid 60178 00:06:48.490 17:57:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60178' 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 60178 00:06:48.490 17:57:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 60178 00:06:51.067 00:06:51.067 real 0m10.393s 00:06:51.067 user 0m10.999s 00:06:51.067 sys 0m1.212s 00:06:51.067 ************************************ 00:06:51.067 END TEST locking_app_on_unlocked_coremask 00:06:51.067 ************************************ 00:06:51.067 17:57:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:51.067 17:57:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.067 17:57:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:51.067 17:57:06 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:51.067 17:57:06 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:51.068 17:57:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.068 ************************************ 00:06:51.068 START TEST locking_app_on_locked_coremask 00:06:51.068 ************************************ 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60313 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60313 /var/tmp/spdk.sock 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60313 ']' 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:51.068 17:57:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.068 [2024-10-28 17:57:07.111086] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
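The locking_app_on_locked_coremask test starting here expects the second claim on core 0 to be rejected, which is why its waitforlisten call is wrapped in NOT, as in the earlier negative cases. A simplified sketch of that wrapper (the real helper in autotest_common.sh also special-cases exit codes above 128):

# Run a command that must fail and flip its status, so the test passes
# exactly when the claim is rejected (the es=1 seen in the trace).
NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))   # success only if the wrapped command failed
}
# usage: NOT waitforlisten "$pid2" /var/tmp/spdk2.sock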
00:06:51.068 [2024-10-28 17:57:07.111267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60313 ] 00:06:51.068 [2024-10-28 17:57:07.290928] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.068 [2024-10-28 17:57:07.394100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60329 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60329 /var/tmp/spdk2.sock 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60329 /var/tmp/spdk2.sock 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60329 /var/tmp/spdk2.sock 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 60329 ']' 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:52.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:52.001 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:52.001 [2024-10-28 17:57:08.262671] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:06:52.002 [2024-10-28 17:57:08.263342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60329 ] 00:06:52.002 [2024-10-28 17:57:08.462975] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60313 has claimed it. 00:06:52.002 [2024-10-28 17:57:08.463057] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:52.569 ERROR: process (pid: 60329) is no longer running 00:06:52.569 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60329) - No such process 00:06:52.569 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:52.569 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:52.569 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:52.569 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:52.569 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:52.569 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:52.569 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60313 00:06:52.569 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60313 00:06:52.569 17:57:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60313 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 60313 ']' 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 60313 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60313 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:53.135 killing process with pid 60313 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60313' 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 60313 00:06:53.135 17:57:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 60313 00:06:55.078 00:06:55.078 real 0m4.554s 00:06:55.078 user 0m5.122s 00:06:55.078 sys 0m0.714s 00:06:55.078 ************************************ 00:06:55.078 END TEST locking_app_on_locked_coremask 00:06:55.078 ************************************ 00:06:55.078 17:57:11 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:55.078 17:57:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.078 17:57:11 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:55.078 17:57:11 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:55.078 17:57:11 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:55.078 17:57:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:55.336 ************************************ 00:06:55.336 START TEST locking_overlapped_coremask 00:06:55.336 ************************************ 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60398 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60398 /var/tmp/spdk.sock 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60398 ']' 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:55.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:55.336 17:57:11 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:55.336 [2024-10-28 17:57:11.685602] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
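This test launches its first target with -m 0x7 and, below, a second with -m 0x1c; the two masks intersect, so the second target's claim on the shared core must fail. A quick arithmetic check of the overlap:

# Why 0x7 and 0x1c collide: they share exactly one bit.
m1=0x7    # binary 00111 -> cores 0,1,2
m2=0x1c   # binary 11100 -> cores 2,3,4
printf 'overlap: 0x%x\n' $(( m1 & m2 ))   # prints 0x4 -> bit 2, i.e. core 2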
00:06:55.336 [2024-10-28 17:57:11.685740] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60398 ] 00:06:55.595 [2024-10-28 17:57:11.864326] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:55.595 [2024-10-28 17:57:11.995743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.595 [2024-10-28 17:57:11.995892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.595 [2024-10-28 17:57:11.995908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60422 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60422 /var/tmp/spdk2.sock 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 60422 /var/tmp/spdk2.sock 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 60422 /var/tmp/spdk2.sock 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 60422 ']' 00:06:56.530 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:56.531 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:56.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:56.531 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:56.531 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:56.531 17:57:12 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:56.531 [2024-10-28 17:57:12.947619] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:06:56.531 [2024-10-28 17:57:12.947809] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60422 ] 00:06:56.788 [2024-10-28 17:57:13.149820] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60398 has claimed it. 00:06:56.788 [2024-10-28 17:57:13.149917] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:57.374 ERROR: process (pid: 60422) is no longer running 00:06:57.374 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 848: kill: (60422) - No such process 00:06:57.374 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:06:57.374 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:06:57.374 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:57.374 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:57.374 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:57.374 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:57.374 17:57:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:57.374 17:57:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:57.374 17:57:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60398 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 60398 ']' 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 60398 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60398 00:06:57.375 killing process with pid 60398 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60398' 00:06:57.375 17:57:13 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 60398 00:06:57.375 17:57:13 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 60398 00:06:59.272 00:06:59.272 real 0m4.134s 00:06:59.272 user 0m11.327s 00:06:59.272 sys 0m0.566s 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.272 ************************************ 00:06:59.272 END TEST locking_overlapped_coremask 00:06:59.272 ************************************ 00:06:59.272 17:57:15 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:59.272 17:57:15 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:06:59.272 17:57:15 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:06:59.272 17:57:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:59.272 ************************************ 00:06:59.272 START TEST locking_overlapped_coremask_via_rpc 00:06:59.272 ************************************ 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60486 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60486 /var/tmp/spdk.sock 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60486 ']' 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:06:59.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:06:59.272 17:57:15 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.531 [2024-10-28 17:57:15.864712] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:06:59.531 [2024-10-28 17:57:15.864899] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60486 ] 00:06:59.789 [2024-10-28 17:57:16.079254] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
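Before finishing, locking_overlapped_coremask also ran check_remaining_locks (traced a few lines up) to confirm that exactly the lock files implied by the 0x7 mask exist. A sketch of that comparison, lifted from the trace:

# Glob the lock files actually present and compare with the set a
# three-core (0x7) mask implies: spdk_cpu_lock_000 through _002.
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]] \
    && echo "lock files match the 0x7 mask" \
    || echo "unexpected lock files: ${locks[*]}"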
00:06:59.789 [2024-10-28 17:57:16.079335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:59.789 [2024-10-28 17:57:16.185143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:59.790 [2024-10-28 17:57:16.185251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.790 [2024-10-28 17:57:16.185262] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60504 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60504 /var/tmp/spdk2.sock 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60504 ']' 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:00.751 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:00.751 17:57:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:00.751 [2024-10-28 17:57:17.095484] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:07:00.751 [2024-10-28 17:57:17.095665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60504 ] 00:07:01.009 [2024-10-28 17:57:17.298756] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:01.009 [2024-10-28 17:57:17.298820] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.269 [2024-10-28 17:57:17.523499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.269 [2024-10-28 17:57:17.526956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.269 [2024-10-28 17:57:17.526974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.642 [2024-10-28 17:57:19.040076] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60486 has claimed it. 
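What failed here: the first framework_enable_cpumask_locks call (against the default socket, i.e. pid 60486) claimed per-core lock files for cores 0-2, so the second target's claim on its overlapping core 2 is refused. The lock files follow the /var/tmp/spdk_cpu_lock_NNN pattern that check_remaining_locks globs for further down. A rough stand-alone illustration of this style of advisory file locking, using util-linux flock(1) (the exact locking primitive SPDK uses internally may differ):

    # holder: take an exclusive lock on the core-2 lock file and keep it a while
    flock -x /var/tmp/spdk_cpu_lock_002 -c 'sleep 60' &

    # contender: a non-blocking attempt on the same file now fails,
    # mirroring the ERROR above
    flock -xn /var/tmp/spdk_cpu_lock_002 -c 'true' || echo 'core 2 already claimed'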
00:07:02.642 request: 00:07:02.642 { 00:07:02.642 "method": "framework_enable_cpumask_locks", 00:07:02.642 "req_id": 1 00:07:02.642 } 00:07:02.642 Got JSON-RPC error response 00:07:02.642 response: 00:07:02.642 { 00:07:02.642 "code": -32603, 00:07:02.642 "message": "Failed to claim CPU core: 2" 00:07:02.642 } 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60486 /var/tmp/spdk.sock 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60486 ']' 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.642 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.901 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:02.901 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:02.901 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60504 /var/tmp/spdk2.sock 00:07:02.901 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 60504 ']' 00:07:02.901 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.901 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:02.901 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:02.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
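The rejected call above can be reproduced by hand against the second target's socket with the same rpc.py client this harness wraps; while pid 60486 still holds its locks, the result is the captured -32603 response:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock \
        framework_enable_cpumask_locks
    # -> JSON-RPC error -32603, "Failed to claim CPU core: 2"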
00:07:02.901 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:02.901 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.468 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:03.468 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:07:03.468 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:03.468 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:03.468 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:03.468 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:03.468 00:07:03.468 real 0m3.969s 00:07:03.468 user 0m1.688s 00:07:03.468 sys 0m0.209s 00:07:03.468 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:03.468 17:57:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.468 ************************************ 00:07:03.468 END TEST locking_overlapped_coremask_via_rpc 00:07:03.468 ************************************ 00:07:03.468 17:57:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:03.468 17:57:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60486 ]] 00:07:03.468 17:57:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60486 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60486 ']' 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60486 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60486 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:03.468 killing process with pid 60486 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60486' 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60486 00:07:03.468 17:57:19 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60486 00:07:05.397 17:57:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60504 ]] 00:07:05.397 17:57:21 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60504 00:07:05.397 17:57:21 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60504 ']' 00:07:05.397 17:57:21 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60504 00:07:05.397 17:57:21 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:07:05.397 17:57:21 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:05.397 
17:57:21 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60504 00:07:05.655 17:57:21 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:07:05.655 17:57:21 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:07:05.655 killing process with pid 60504 00:07:05.656 17:57:21 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60504' 00:07:05.656 17:57:21 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 60504 00:07:05.656 17:57:21 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 60504 00:07:07.557 17:57:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.557 17:57:23 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:07.557 17:57:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60486 ]] 00:07:07.557 17:57:23 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60486 00:07:07.557 17:57:23 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60486 ']' 00:07:07.557 17:57:23 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60486 00:07:07.557 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60486) - No such process 00:07:07.557 Process with pid 60486 is not found 00:07:07.557 17:57:23 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60486 is not found' 00:07:07.557 17:57:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60504 ]] 00:07:07.557 17:57:23 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60504 00:07:07.557 17:57:23 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 60504 ']' 00:07:07.557 17:57:23 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 60504 00:07:07.557 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (60504) - No such process 00:07:07.557 Process with pid 60504 is not found 00:07:07.557 17:57:23 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 60504 is not found' 00:07:07.557 17:57:23 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:07.557 ************************************ 00:07:07.557 END TEST cpu_locks 00:07:07.557 ************************************ 00:07:07.557 00:07:07.557 real 0m45.444s 00:07:07.557 user 1m18.775s 00:07:07.557 sys 0m6.035s 00:07:07.557 17:57:23 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.557 17:57:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.557 00:07:07.557 real 1m16.979s 00:07:07.557 user 2m23.474s 00:07:07.557 sys 0m9.731s 00:07:07.557 17:57:23 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:07.557 17:57:23 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.557 ************************************ 00:07:07.557 END TEST event 00:07:07.557 ************************************ 00:07:07.816 17:57:24 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:07.816 17:57:24 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:07.816 17:57:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.816 17:57:24 -- common/autotest_common.sh@10 -- # set +x 00:07:07.816 ************************************ 00:07:07.816 START TEST thread 00:07:07.816 ************************************ 00:07:07.816 17:57:24 thread -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:07.816 * Looking for test storage... 
00:07:07.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:07.816 17:57:24 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:07.816 17:57:24 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:07:07.816 17:57:24 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:07.816 17:57:24 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:07.816 17:57:24 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.816 17:57:24 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.816 17:57:24 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.816 17:57:24 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.816 17:57:24 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.816 17:57:24 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.816 17:57:24 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.816 17:57:24 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.816 17:57:24 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.816 17:57:24 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.816 17:57:24 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.816 17:57:24 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:07.816 17:57:24 thread -- scripts/common.sh@345 -- # : 1 00:07:07.816 17:57:24 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.817 17:57:24 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.817 17:57:24 thread -- scripts/common.sh@365 -- # decimal 1 00:07:07.817 17:57:24 thread -- scripts/common.sh@353 -- # local d=1 00:07:07.817 17:57:24 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.817 17:57:24 thread -- scripts/common.sh@355 -- # echo 1 00:07:07.817 17:57:24 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.817 17:57:24 thread -- scripts/common.sh@366 -- # decimal 2 00:07:07.817 17:57:24 thread -- scripts/common.sh@353 -- # local d=2 00:07:07.817 17:57:24 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.817 17:57:24 thread -- scripts/common.sh@355 -- # echo 2 00:07:07.817 17:57:24 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.817 17:57:24 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.817 17:57:24 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.817 17:57:24 thread -- scripts/common.sh@368 -- # return 0 00:07:07.817 17:57:24 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.817 17:57:24 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.817 --rc genhtml_branch_coverage=1 00:07:07.817 --rc genhtml_function_coverage=1 00:07:07.817 --rc genhtml_legend=1 00:07:07.817 --rc geninfo_all_blocks=1 00:07:07.817 --rc geninfo_unexecuted_blocks=1 00:07:07.817 00:07:07.817 ' 00:07:07.817 17:57:24 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.817 --rc genhtml_branch_coverage=1 00:07:07.817 --rc genhtml_function_coverage=1 00:07:07.817 --rc genhtml_legend=1 00:07:07.817 --rc geninfo_all_blocks=1 00:07:07.817 --rc geninfo_unexecuted_blocks=1 00:07:07.817 00:07:07.817 ' 00:07:07.817 17:57:24 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:07.817 --rc genhtml_branch_coverage=1 00:07:07.817 --rc genhtml_function_coverage=1 00:07:07.817 --rc genhtml_legend=1 00:07:07.817 --rc geninfo_all_blocks=1 00:07:07.817 --rc geninfo_unexecuted_blocks=1 00:07:07.817 00:07:07.817 ' 00:07:07.817 17:57:24 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:07.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.817 --rc genhtml_branch_coverage=1 00:07:07.817 --rc genhtml_function_coverage=1 00:07:07.817 --rc genhtml_legend=1 00:07:07.817 --rc geninfo_all_blocks=1 00:07:07.817 --rc geninfo_unexecuted_blocks=1 00:07:07.817 00:07:07.817 ' 00:07:07.817 17:57:24 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.817 17:57:24 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:07.817 17:57:24 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:07.817 17:57:24 thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.817 ************************************ 00:07:07.817 START TEST thread_poller_perf 00:07:07.817 ************************************ 00:07:07.817 17:57:24 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:07.817 [2024-10-28 17:57:24.257643] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:07:07.817 [2024-10-28 17:57:24.257798] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60688 ] 00:07:08.076 [2024-10-28 17:57:24.442627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.076 [2024-10-28 17:57:24.550447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.076 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:09.452 [2024-10-28T17:57:25.930Z] ====================================== 00:07:09.452 [2024-10-28T17:57:25.930Z] busy:2210085641 (cyc) 00:07:09.452 [2024-10-28T17:57:25.930Z] total_run_count: 288000 00:07:09.452 [2024-10-28T17:57:25.930Z] tsc_hz: 2200000000 (cyc) 00:07:09.452 [2024-10-28T17:57:25.930Z] ====================================== 00:07:09.452 [2024-10-28T17:57:25.930Z] poller_cost: 7673 (cyc), 3487 (nsec) 00:07:09.452 00:07:09.452 real 0m1.568s 00:07:09.452 user 0m1.379s 00:07:09.452 sys 0m0.081s 00:07:09.452 17:57:25 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:09.452 17:57:25 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.452 ************************************ 00:07:09.452 END TEST thread_poller_perf 00:07:09.452 ************************************ 00:07:09.452 17:57:25 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.452 17:57:25 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:07:09.452 17:57:25 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:09.452 17:57:25 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.452 ************************************ 00:07:09.452 START TEST thread_poller_perf 00:07:09.452 ************************************ 00:07:09.452 17:57:25 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:09.452 [2024-10-28 17:57:25.896271] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:07:09.452 [2024-10-28 17:57:25.896514] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60719 ] 00:07:09.710 [2024-10-28 17:57:26.099115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.969 Running 1000 pollers for 1 seconds with 0 microseconds period. 
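The poller_cost figure in the table above is derived from the other fields rather than measured separately: cycles per poller call is busy divided by total_run_count, and the nanosecond value converts through tsc_hz. For this timed (1 us period) run:

    poller_cost = 2210085641 cyc / 288000 runs ≈ 7673 cyc
    7673 cyc / 2.2 cyc-per-nsec (tsc_hz = 2200000000) ≈ 3487 nsec

The zero-period run whose results follow drives the same 1000 pollers on every reactor iteration instead of on a timer, so total_run_count is far higher and the per-call cost correspondingly lower.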
00:07:09.969 [2024-10-28 17:57:26.202454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.361 [2024-10-28T17:57:27.839Z] ====================================== 00:07:11.361 [2024-10-28T17:57:27.839Z] busy:2203702909 (cyc) 00:07:11.361 [2024-10-28T17:57:27.839Z] total_run_count: 3670000 00:07:11.361 [2024-10-28T17:57:27.839Z] tsc_hz: 2200000000 (cyc) 00:07:11.361 [2024-10-28T17:57:27.840Z] ====================================== 00:07:11.362 [2024-10-28T17:57:27.840Z] poller_cost: 600 (cyc), 272 (nsec) 00:07:11.362 00:07:11.362 real 0m1.597s 00:07:11.362 user 0m1.363s 00:07:11.362 sys 0m0.124s 00:07:11.362 17:57:27 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.362 ************************************ 00:07:11.362 END TEST thread_poller_perf 00:07:11.362 17:57:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:11.362 ************************************ 00:07:11.362 17:57:27 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:11.362 00:07:11.362 real 0m3.428s 00:07:11.362 user 0m2.868s 00:07:11.362 sys 0m0.341s 00:07:11.362 17:57:27 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:11.362 17:57:27 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.362 ************************************ 00:07:11.362 END TEST thread 00:07:11.362 ************************************ 00:07:11.362 17:57:27 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:11.362 17:57:27 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:11.362 17:57:27 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:11.362 17:57:27 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:11.362 17:57:27 -- common/autotest_common.sh@10 -- # set +x 00:07:11.362 ************************************ 00:07:11.362 START TEST app_cmdline 00:07:11.362 ************************************ 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:11.362 * Looking for test storage... 
00:07:11.362 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.362 17:57:27 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:11.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.362 --rc genhtml_branch_coverage=1 00:07:11.362 --rc genhtml_function_coverage=1 00:07:11.362 --rc genhtml_legend=1 00:07:11.362 --rc geninfo_all_blocks=1 00:07:11.362 --rc geninfo_unexecuted_blocks=1 00:07:11.362 00:07:11.362 ' 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:11.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.362 --rc genhtml_branch_coverage=1 00:07:11.362 --rc genhtml_function_coverage=1 00:07:11.362 --rc genhtml_legend=1 00:07:11.362 --rc geninfo_all_blocks=1 00:07:11.362 --rc geninfo_unexecuted_blocks=1 00:07:11.362 
00:07:11.362 ' 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:11.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.362 --rc genhtml_branch_coverage=1 00:07:11.362 --rc genhtml_function_coverage=1 00:07:11.362 --rc genhtml_legend=1 00:07:11.362 --rc geninfo_all_blocks=1 00:07:11.362 --rc geninfo_unexecuted_blocks=1 00:07:11.362 00:07:11.362 ' 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:11.362 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.362 --rc genhtml_branch_coverage=1 00:07:11.362 --rc genhtml_function_coverage=1 00:07:11.362 --rc genhtml_legend=1 00:07:11.362 --rc geninfo_all_blocks=1 00:07:11.362 --rc geninfo_unexecuted_blocks=1 00:07:11.362 00:07:11.362 ' 00:07:11.362 17:57:27 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:11.362 17:57:27 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60808 00:07:11.362 17:57:27 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60808 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 60808 ']' 00:07:11.362 17:57:27 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:11.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:11.362 17:57:27 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.621 [2024-10-28 17:57:27.840764] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
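The --rpcs-allowed flag on this launch restricts the JSON-RPC surface to exactly spdk_get_version and rpc_get_methods, which is what the assertions below exercise: the allowed pair succeeds, and any other method is rejected. Checked by hand it would look like this (default /var/tmp/spdk.sock, same client as above):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version        # allowed
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats  # any non-allowed method
    # -> JSON-RPC error -32601, "Method not found", as captured further down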
00:07:11.621 [2024-10-28 17:57:27.840952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60808 ] 00:07:11.621 [2024-10-28 17:57:28.027001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.880 [2024-10-28 17:57:28.152124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.446 17:57:28 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:12.446 17:57:28 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:07:12.446 17:57:28 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:12.705 { 00:07:12.705 "version": "SPDK v25.01-pre git sha1 d490b5576", 00:07:12.705 "fields": { 00:07:12.705 "major": 25, 00:07:12.705 "minor": 1, 00:07:12.705 "patch": 0, 00:07:12.705 "suffix": "-pre", 00:07:12.705 "commit": "d490b5576" 00:07:12.705 } 00:07:12.705 } 00:07:12.705 17:57:29 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:12.705 17:57:29 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:12.705 17:57:29 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:12.705 17:57:29 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:12.705 17:57:29 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:12.705 17:57:29 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:12.705 17:57:29 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:12.705 17:57:29 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.705 17:57:29 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:12.963 17:57:29 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:12.963 17:57:29 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:12.963 17:57:29 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:12.963 17:57:29 app_cmdline -- common/autotest_common.sh@653 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:13.221 request: 00:07:13.221 { 00:07:13.221 "method": "env_dpdk_get_mem_stats", 00:07:13.221 "req_id": 1 00:07:13.221 } 00:07:13.221 Got JSON-RPC error response 00:07:13.221 response: 00:07:13.221 { 00:07:13.221 "code": -32601, 00:07:13.221 "message": "Method not found" 00:07:13.221 } 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:13.221 17:57:29 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60808 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 60808 ']' 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 60808 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60808 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:13.221 killing process with pid 60808 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60808' 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@971 -- # kill 60808 00:07:13.221 17:57:29 app_cmdline -- common/autotest_common.sh@976 -- # wait 60808 00:07:15.755 00:07:15.755 real 0m4.127s 00:07:15.755 user 0m4.688s 00:07:15.755 sys 0m0.565s 00:07:15.755 17:57:31 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.755 17:57:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.755 ************************************ 00:07:15.755 END TEST app_cmdline 00:07:15.755 ************************************ 00:07:15.755 17:57:31 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:15.755 17:57:31 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:15.755 17:57:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.755 17:57:31 -- common/autotest_common.sh@10 -- # set +x 00:07:15.755 ************************************ 00:07:15.755 START TEST version 00:07:15.755 ************************************ 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:15.755 * Looking for test storage... 
00:07:15.755 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1691 -- # lcov --version 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:15.755 17:57:31 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.755 17:57:31 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.755 17:57:31 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.755 17:57:31 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.755 17:57:31 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.755 17:57:31 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.755 17:57:31 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.755 17:57:31 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.755 17:57:31 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.755 17:57:31 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.755 17:57:31 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.755 17:57:31 version -- scripts/common.sh@344 -- # case "$op" in 00:07:15.755 17:57:31 version -- scripts/common.sh@345 -- # : 1 00:07:15.755 17:57:31 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.755 17:57:31 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.755 17:57:31 version -- scripts/common.sh@365 -- # decimal 1 00:07:15.755 17:57:31 version -- scripts/common.sh@353 -- # local d=1 00:07:15.755 17:57:31 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.755 17:57:31 version -- scripts/common.sh@355 -- # echo 1 00:07:15.755 17:57:31 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.755 17:57:31 version -- scripts/common.sh@366 -- # decimal 2 00:07:15.755 17:57:31 version -- scripts/common.sh@353 -- # local d=2 00:07:15.755 17:57:31 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.755 17:57:31 version -- scripts/common.sh@355 -- # echo 2 00:07:15.755 17:57:31 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.755 17:57:31 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.755 17:57:31 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.755 17:57:31 version -- scripts/common.sh@368 -- # return 0 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:15.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.755 --rc genhtml_branch_coverage=1 00:07:15.755 --rc genhtml_function_coverage=1 00:07:15.755 --rc genhtml_legend=1 00:07:15.755 --rc geninfo_all_blocks=1 00:07:15.755 --rc geninfo_unexecuted_blocks=1 00:07:15.755 00:07:15.755 ' 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:15.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.755 --rc genhtml_branch_coverage=1 00:07:15.755 --rc genhtml_function_coverage=1 00:07:15.755 --rc genhtml_legend=1 00:07:15.755 --rc geninfo_all_blocks=1 00:07:15.755 --rc geninfo_unexecuted_blocks=1 00:07:15.755 00:07:15.755 ' 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:15.755 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:15.755 --rc genhtml_branch_coverage=1 00:07:15.755 --rc genhtml_function_coverage=1 00:07:15.755 --rc genhtml_legend=1 00:07:15.755 --rc geninfo_all_blocks=1 00:07:15.755 --rc geninfo_unexecuted_blocks=1 00:07:15.755 00:07:15.755 ' 00:07:15.755 17:57:31 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:15.755 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.755 --rc genhtml_branch_coverage=1 00:07:15.755 --rc genhtml_function_coverage=1 00:07:15.755 --rc genhtml_legend=1 00:07:15.755 --rc geninfo_all_blocks=1 00:07:15.755 --rc geninfo_unexecuted_blocks=1 00:07:15.755 00:07:15.755 ' 00:07:15.755 17:57:31 version -- app/version.sh@17 -- # get_header_version major 00:07:15.755 17:57:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:15.755 17:57:31 version -- app/version.sh@14 -- # cut -f2 00:07:15.755 17:57:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.755 17:57:31 version -- app/version.sh@17 -- # major=25 00:07:15.755 17:57:31 version -- app/version.sh@18 -- # get_header_version minor 00:07:15.755 17:57:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:15.755 17:57:31 version -- app/version.sh@14 -- # cut -f2 00:07:15.755 17:57:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.755 17:57:31 version -- app/version.sh@18 -- # minor=1 00:07:15.756 17:57:31 version -- app/version.sh@19 -- # get_header_version patch 00:07:15.756 17:57:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:15.756 17:57:31 version -- app/version.sh@14 -- # cut -f2 00:07:15.756 17:57:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.756 17:57:31 version -- app/version.sh@19 -- # patch=0 00:07:15.756 17:57:31 version -- app/version.sh@20 -- # get_header_version suffix 00:07:15.756 17:57:31 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:15.756 17:57:31 version -- app/version.sh@14 -- # cut -f2 00:07:15.756 17:57:31 version -- app/version.sh@14 -- # tr -d '"' 00:07:15.756 17:57:31 version -- app/version.sh@20 -- # suffix=-pre 00:07:15.756 17:57:31 version -- app/version.sh@22 -- # version=25.1 00:07:15.756 17:57:31 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:15.756 17:57:31 version -- app/version.sh@28 -- # version=25.1rc0 00:07:15.756 17:57:31 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:15.756 17:57:31 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:15.756 17:57:31 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:15.756 17:57:31 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:15.756 00:07:15.756 real 0m0.257s 00:07:15.756 user 0m0.177s 00:07:15.756 sys 0m0.115s 00:07:15.756 17:57:31 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:15.756 17:57:31 version -- common/autotest_common.sh@10 -- # set +x 00:07:15.756 ************************************ 00:07:15.756 END TEST version 00:07:15.756 ************************************ 00:07:15.756 17:57:31 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:15.756 17:57:31 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:15.756 17:57:31 -- spdk/autotest.sh@194 -- # uname -s 00:07:15.756 17:57:31 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:15.756 17:57:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:15.756 17:57:31 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:15.756 17:57:31 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:07:15.756 17:57:31 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:15.756 17:57:31 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:15.756 17:57:31 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:15.756 17:57:31 -- common/autotest_common.sh@10 -- # set +x 00:07:15.756 ************************************ 00:07:15.756 START TEST blockdev_nvme 00:07:15.756 ************************************ 00:07:15.756 17:57:31 blockdev_nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:07:15.756 * Looking for test storage... 00:07:15.756 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.756 17:57:32 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.756 --rc genhtml_branch_coverage=1 00:07:15.756 --rc genhtml_function_coverage=1 00:07:15.756 --rc genhtml_legend=1 00:07:15.756 --rc geninfo_all_blocks=1 00:07:15.756 --rc geninfo_unexecuted_blocks=1 00:07:15.756 00:07:15.756 ' 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.756 --rc genhtml_branch_coverage=1 00:07:15.756 --rc genhtml_function_coverage=1 00:07:15.756 --rc genhtml_legend=1 00:07:15.756 --rc geninfo_all_blocks=1 00:07:15.756 --rc geninfo_unexecuted_blocks=1 00:07:15.756 00:07:15.756 ' 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.756 --rc genhtml_branch_coverage=1 00:07:15.756 --rc genhtml_function_coverage=1 00:07:15.756 --rc genhtml_legend=1 00:07:15.756 --rc geninfo_all_blocks=1 00:07:15.756 --rc geninfo_unexecuted_blocks=1 00:07:15.756 00:07:15.756 ' 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:15.756 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.756 --rc genhtml_branch_coverage=1 00:07:15.756 --rc genhtml_function_coverage=1 00:07:15.756 --rc genhtml_legend=1 00:07:15.756 --rc geninfo_all_blocks=1 00:07:15.756 --rc geninfo_unexecuted_blocks=1 00:07:15.756 00:07:15.756 ' 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:15.756 17:57:32 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@673 -- # uname -s 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@681 -- # test_type=nvme 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@683 -- # dek= 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == bdev ]] 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@689 -- # [[ nvme == crypto_* ]] 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=60997 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:15.756 17:57:32 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 60997 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@833 -- # '[' -z 60997 ']' 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:15.756 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:15.756 17:57:32 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:16.015 [2024-10-28 17:57:32.309606] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
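This suite drives a bare spdk_tgt: gen_nvme.sh emits a bdev subsystem config attaching the four QEMU NVMe controllers (traddr 0000:00:10.0 through 0000:00:13.0), load_subsystem_config below feeds it in, and bdev_get_bdevs then reports one bdev per namespace. Once the config below is loaded, the resulting bdevs can be listed directly with the same rpc.py and jq the script itself uses:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs | jq -r '.[].name'
    # expect Nvme0n1, Nvme1n1, Nvme2n1..Nvme2n3, Nvme3n1, matching the dump below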
00:07:16.015 [2024-10-28 17:57:32.309757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60997 ] 00:07:16.015 [2024-10-28 17:57:32.478713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.273 [2024-10-28 17:57:32.579972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@866 -- # return 0 00:07:17.231 17:57:33 blockdev_nvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:07:17.231 17:57:33 blockdev_nvme -- bdev/blockdev.sh@698 -- # setup_nvme_conf 00:07:17.231 17:57:33 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:07:17.231 17:57:33 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:07:17.231 17:57:33 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:17.231 17:57:33 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.231 17:57:33 blockdev_nvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.231 17:57:33 blockdev_nvme -- bdev/blockdev.sh@739 -- # cat 00:07:17.231 17:57:33 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.231 17:57:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.490 17:57:33 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.490 17:57:33 blockdev_nvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.490 17:57:33 blockdev_nvme -- 
bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:07:17.490 17:57:33 blockdev_nvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:17.490 17:57:33 blockdev_nvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:07:17.490 17:57:33 blockdev_nvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.490 17:57:33 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:07:17.490 17:57:33 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:07:17.491 17:57:33 blockdev_nvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "03efa91f-e86c-4627-98d6-bf80f4e079ac"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "03efa91f-e86c-4627-98d6-bf80f4e079ac",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "05c07de7-f4f5-4c62-9a51-acab21db11e8"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "05c07de7-f4f5-4c62-9a51-acab21db11e8",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "6c8acf2f-9564-4fc1-8dcf-15446f1a0639"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "6c8acf2f-9564-4fc1-8dcf-15446f1a0639",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "eb153293-0fa7-4891-9835-a2b076ad4f71"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "eb153293-0fa7-4891-9835-a2b076ad4f71",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "157c02c4-8df8-4a6a-a9b9-cb66070541c3"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "157c02c4-8df8-4a6a-a9b9-cb66070541c3",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "e8694194-094c-496c-962e-e61d220b1376"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "e8694194-094c-496c-962e-e61d220b1376",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:07:17.491 17:57:33 blockdev_nvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:07:17.491 17:57:33 blockdev_nvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:07:17.491 17:57:33 blockdev_nvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:07:17.491 17:57:33 blockdev_nvme -- bdev/blockdev.sh@753 -- # killprocess 60997 00:07:17.491 17:57:33 blockdev_nvme -- common/autotest_common.sh@952 -- # '[' -z 60997 ']' 00:07:17.491 17:57:33 blockdev_nvme -- common/autotest_common.sh@956 -- # kill -0 60997 00:07:17.491 17:57:33 blockdev_nvme -- common/autotest_common.sh@957 -- # uname 00:07:17.491 17:57:33 
blockdev_nvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:17.491 17:57:33 blockdev_nvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 60997 00:07:17.491 17:57:33 blockdev_nvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:17.491 17:57:33 blockdev_nvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:17.491 killing process with pid 60997 00:07:17.491 17:57:33 blockdev_nvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 60997' 00:07:17.491 17:57:33 blockdev_nvme -- common/autotest_common.sh@971 -- # kill 60997 00:07:17.491 17:57:33 blockdev_nvme -- common/autotest_common.sh@976 -- # wait 60997 00:07:20.021 17:57:36 blockdev_nvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:20.021 17:57:36 blockdev_nvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:20.021 17:57:36 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:07:20.021 17:57:36 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:20.021 17:57:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:20.021 ************************************ 00:07:20.021 START TEST bdev_hello_world 00:07:20.021 ************************************ 00:07:20.021 17:57:36 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:07:20.021 [2024-10-28 17:57:36.136876] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:07:20.021 [2024-10-28 17:57:36.137040] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61086 ] 00:07:20.021 [2024-10-28 17:57:36.327113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.021 [2024-10-28 17:57:36.456706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.958 [2024-10-28 17:57:37.093987] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:20.958 [2024-10-28 17:57:37.094078] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:07:20.958 [2024-10-28 17:57:37.094108] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:20.958 [2024-10-28 17:57:37.097501] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:20.958 [2024-10-28 17:57:37.097933] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:20.958 [2024-10-28 17:57:37.097969] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:20.958 [2024-10-28 17:57:37.098240] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
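For reference: the NVMe setup and hello-world round trip traced above can be re-run by hand. A minimal sketch, assuming the same checkout under /home/vagrant/spdk_repo/spdk and the four QEMU controllers at 0000:00:10.0 through 0000:00:13.0; the rpc.py call is a hand-typed equivalent of one entry in the load_subsystem_config JSON above, not what the harness itself issues.

  cd /home/vagrant/spdk_repo/spdk
  # Attach one controller by hand against an already-running target;
  # the harness instead loads all four at once via load_subsystem_config.
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
  # Or run the standalone example, which loads the same config itself and
  # performs the write/read round trip logged above.
  ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1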
00:07:20.958 00:07:20.958 [2024-10-28 17:57:37.098296] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:21.895 00:07:21.895 real 0m2.118s 00:07:21.895 user 0m1.758s 00:07:21.895 sys 0m0.247s 00:07:21.895 17:57:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:21.895 17:57:38 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 ************************************ 00:07:21.895 END TEST bdev_hello_world 00:07:21.895 ************************************ 00:07:21.895 17:57:38 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:07:21.895 17:57:38 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:07:21.895 17:57:38 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:21.895 17:57:38 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 ************************************ 00:07:21.895 START TEST bdev_bounds 00:07:21.895 ************************************ 00:07:21.895 Process bdevio pid: 61134 00:07:21.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61134 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61134' 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61134 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 61134 ']' 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:21.895 17:57:38 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:21.895 [2024-10-28 17:57:38.278092] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:07:21.895 [2024-10-28 17:57:38.278499] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61134 ] 00:07:22.154 [2024-10-28 17:57:38.465225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.154 [2024-10-28 17:57:38.600189] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.154 [2024-10-28 17:57:38.600296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.154 [2024-10-28 17:57:38.600304] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:23.091 17:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:23.091 17:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:07:23.091 17:57:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:23.091 I/O targets: 00:07:23.091 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:07:23.091 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:07:23.091 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:23.091 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:23.091 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:07:23.091 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:07:23.091 00:07:23.091 00:07:23.091 CUnit - A unit testing framework for C - Version 2.1-3 00:07:23.091 http://cunit.sourceforge.net/ 00:07:23.091 00:07:23.091 00:07:23.091 Suite: bdevio tests on: Nvme3n1 00:07:23.091 Test: blockdev write read block ...passed 00:07:23.091 Test: blockdev write zeroes read block ...passed 00:07:23.091 Test: blockdev write zeroes read no split ...passed 00:07:23.091 Test: blockdev write zeroes read split ...passed 00:07:23.091 Test: blockdev write zeroes read split partial ...passed 00:07:23.091 Test: blockdev reset ...[2024-10-28 17:57:39.508513] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:07:23.091 passed 00:07:23.091 Test: blockdev write read 8 blocks ...[2024-10-28 17:57:39.512864] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:07:23.091 passed 00:07:23.091 Test: blockdev write read size > 128k ...passed 00:07:23.091 Test: blockdev write read invalid size ...passed 00:07:23.091 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.091 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.091 Test: blockdev write read max offset ...passed 00:07:23.091 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.091 Test: blockdev writev readv 8 blocks ...passed 00:07:23.091 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.091 Test: blockdev writev readv block ...passed 00:07:23.091 Test: blockdev writev readv size > 128k ...passed 00:07:23.091 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.091 Test: blockdev comparev and writev ...[2024-10-28 17:57:39.522223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 passed 00:07:23.091 Test: blockdev nvme passthru rw ...SGL DATA BLOCK ADDRESS 0x2cb60a000 len:0x1000 00:07:23.091 [2024-10-28 17:57:39.522471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.091 passed 00:07:23.091 Test: blockdev nvme passthru vendor specific ...[2024-10-28 17:57:39.523441] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.091 passed 00:07:23.091 Test: blockdev nvme admin passthru ...[2024-10-28 17:57:39.523502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.091 passed 00:07:23.091 Test: blockdev copy ...passed 00:07:23.091 Suite: bdevio tests on: Nvme2n3 00:07:23.092 Test: blockdev write read block ...passed 00:07:23.092 Test: blockdev write zeroes read block ...passed 00:07:23.092 Test: blockdev write zeroes read no split ...passed 00:07:23.092 Test: blockdev write zeroes read split ...passed 00:07:23.354 Test: blockdev write zeroes read split partial ...passed 00:07:23.354 Test: blockdev reset ...[2024-10-28 17:57:39.598637] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:23.354 [2024-10-28 17:57:39.603408] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller passed 00:07:23.354 Test: blockdev write read 8 blocks ...successful. 
00:07:23.354 passed 00:07:23.354 Test: blockdev write read size > 128k ...passed 00:07:23.354 Test: blockdev write read invalid size ...passed 00:07:23.354 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.354 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.354 Test: blockdev write read max offset ...passed 00:07:23.354 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.354 Test: blockdev writev readv 8 blocks ...passed 00:07:23.354 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.354 Test: blockdev writev readv block ...passed 00:07:23.354 Test: blockdev writev readv size > 128k ...passed 00:07:23.354 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.354 Test: blockdev comparev and writev ...[2024-10-28 17:57:39.612472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2ae806000 len:0x1000 00:07:23.354 [2024-10-28 17:57:39.612547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.354 passed 00:07:23.354 Test: blockdev nvme passthru rw ...passed 00:07:23.354 Test: blockdev nvme passthru vendor specific ...passed 00:07:23.354 Test: blockdev nvme admin passthru ...[2024-10-28 17:57:39.613424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.354 [2024-10-28 17:57:39.613478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.354 passed 00:07:23.354 Test: blockdev copy ...passed 00:07:23.354 Suite: bdevio tests on: Nvme2n2 00:07:23.354 Test: blockdev write read block ...passed 00:07:23.354 Test: blockdev write zeroes read block ...passed 00:07:23.354 Test: blockdev write zeroes read no split ...passed 00:07:23.354 Test: blockdev write zeroes read split ...passed 00:07:23.354 Test: blockdev write zeroes read split partial ...passed 00:07:23.354 Test: blockdev reset ...[2024-10-28 17:57:39.688075] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:23.354 [2024-10-28 17:57:39.692825] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:23.354 passed 00:07:23.354 Test: blockdev write read 8 blocks ...passed 00:07:23.354 Test: blockdev write read size > 128k ...passed 00:07:23.354 Test: blockdev write read invalid size ...passed 00:07:23.354 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.354 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.354 Test: blockdev write read max offset ...passed 00:07:23.354 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.354 Test: blockdev writev readv 8 blocks ...passed 00:07:23.354 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.354 Test: blockdev writev readv block ...passed 00:07:23.354 Test: blockdev writev readv size > 128k ...passed 00:07:23.354 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.354 Test: blockdev comparev and writev ...[2024-10-28 17:57:39.701870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e6e3c000 len:0x1000 00:07:23.354 [2024-10-28 17:57:39.701943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.354 passed 00:07:23.354 Test: blockdev nvme passthru rw ...passed 00:07:23.354 Test: blockdev nvme passthru vendor specific ...passed 00:07:23.354 Test: blockdev nvme admin passthru ...[2024-10-28 17:57:39.702926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.354 [2024-10-28 17:57:39.702984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.354 passed 00:07:23.354 Test: blockdev copy ...passed 00:07:23.354 Suite: bdevio tests on: Nvme2n1 00:07:23.354 Test: blockdev write read block ...passed 00:07:23.354 Test: blockdev write zeroes read block ...passed 00:07:23.354 Test: blockdev write zeroes read no split ...passed 00:07:23.354 Test: blockdev write zeroes read split ...passed 00:07:23.354 Test: blockdev write zeroes read split partial ...passed 00:07:23.354 Test: blockdev reset ...[2024-10-28 17:57:39.775063] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:07:23.354 [2024-10-28 17:57:39.779553] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:07:23.354 passed 00:07:23.354 Test: blockdev write read 8 blocks ...passed 00:07:23.354 Test: blockdev write read size > 128k ...passed 00:07:23.354 Test: blockdev write read invalid size ...passed 00:07:23.354 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.354 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.354 Test: blockdev write read max offset ...passed 00:07:23.354 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.354 Test: blockdev writev readv 8 blocks ...passed 00:07:23.354 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.355 Test: blockdev writev readv block ...passed 00:07:23.355 Test: blockdev writev readv size > 128k ...passed 00:07:23.355 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.355 Test: blockdev comparev and writev ...[2024-10-28 17:57:39.788527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e6e38000 len:0x1000 00:07:23.355 [2024-10-28 17:57:39.788593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.355 passed 00:07:23.355 Test: blockdev nvme passthru rw ...passed 00:07:23.355 Test: blockdev nvme passthru vendor specific ...[2024-10-28 17:57:39.789453] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.355 [2024-10-28 17:57:39.789498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.355 passed 00:07:23.355 Test: blockdev nvme admin passthru ...passed 00:07:23.355 Test: blockdev copy ...passed 00:07:23.355 Suite: bdevio tests on: Nvme1n1 00:07:23.355 Test: blockdev write read block ...passed 00:07:23.355 Test: blockdev write zeroes read block ...passed 00:07:23.355 Test: blockdev write zeroes read no split ...passed 00:07:23.625 Test: blockdev write zeroes read split ...passed 00:07:23.625 Test: blockdev write zeroes read split partial ...passed 00:07:23.625 Test: blockdev reset ...[2024-10-28 17:57:39.862623] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:07:23.625 [2024-10-28 17:57:39.866492] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:07:23.625 passed 00:07:23.625 Test: blockdev write read 8 blocks ...passed 00:07:23.625 Test: blockdev write read size > 128k ...passed 00:07:23.625 Test: blockdev write read invalid size ...passed 00:07:23.625 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.625 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.625 Test: blockdev write read max offset ...passed 00:07:23.625 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.625 Test: blockdev writev readv 8 blocks ...passed 00:07:23.625 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.625 Test: blockdev writev readv block ...passed 00:07:23.625 Test: blockdev writev readv size > 128k ...passed 00:07:23.625 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.625 Test: blockdev comparev and writev ...[2024-10-28 17:57:39.877566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2e6e34000 len:0x1000 00:07:23.625 [2024-10-28 17:57:39.877636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:07:23.625 passed 00:07:23.625 Test: blockdev nvme passthru rw ...passed 00:07:23.625 Test: blockdev nvme passthru vendor specific ...[2024-10-28 17:57:39.878500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:07:23.625 [2024-10-28 17:57:39.878551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:07:23.625 passed 00:07:23.625 Test: blockdev nvme admin passthru ...passed 00:07:23.625 Test: blockdev copy ...passed 00:07:23.625 Suite: bdevio tests on: Nvme0n1 00:07:23.625 Test: blockdev write read block ...passed 00:07:23.625 Test: blockdev write zeroes read block ...passed 00:07:23.625 Test: blockdev write zeroes read no split ...passed 00:07:23.625 Test: blockdev write zeroes read split ...passed 00:07:23.625 Test: blockdev write zeroes read split partial ...passed 00:07:23.625 Test: blockdev reset ...[2024-10-28 17:57:39.945917] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:07:23.625 [2024-10-28 17:57:39.949554] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 
00:07:23.625 passed 00:07:23.625 Test: blockdev write read 8 blocks ...passed 00:07:23.625 Test: blockdev write read size > 128k ...passed 00:07:23.625 Test: blockdev write read invalid size ...passed 00:07:23.625 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:23.625 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:23.625 Test: blockdev write read max offset ...passed 00:07:23.625 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:23.625 Test: blockdev writev readv 8 blocks ...passed 00:07:23.625 Test: blockdev writev readv 30 x 1block ...passed 00:07:23.625 Test: blockdev writev readv block ...passed 00:07:23.625 Test: blockdev writev readv size > 128k ...passed 00:07:23.625 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:23.625 Test: blockdev comparev and writev ...passed 00:07:23.625 Test: blockdev nvme passthru rw ...[2024-10-28 17:57:39.957627] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:07:23.625 separate metadata which is not supported yet. 00:07:23.625 passed 00:07:23.625 Test: blockdev nvme passthru vendor specific ...passed 00:07:23.625 Test: blockdev nvme admin passthru ...[2024-10-28 17:57:39.958165] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:07:23.625 [2024-10-28 17:57:39.958229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:07:23.625 passed 00:07:23.625 Test: blockdev copy ...passed 00:07:23.625 00:07:23.625 Run Summary: Type Total Ran Passed Failed Inactive 00:07:23.625 suites 6 6 n/a 0 0 00:07:23.625 tests 138 138 138 0 0 00:07:23.625 asserts 893 893 893 0 n/a 00:07:23.625 00:07:23.625 Elapsed time = 1.446 seconds 00:07:23.625 0 00:07:23.625 17:57:39 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61134 00:07:23.625 17:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 61134 ']' 00:07:23.625 17:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 61134 00:07:23.625 17:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:07:23.625 17:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:23.625 17:57:39 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61134 00:07:23.625 killing process with pid 61134 00:07:23.625 17:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:23.625 17:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:23.625 17:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61134' 00:07:23.625 17:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 61134 00:07:23.625 17:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 61134 00:07:24.588 ************************************ 00:07:24.588 END TEST bdev_bounds 00:07:24.588 ************************************ 00:07:24.588 17:57:40 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:07:24.588 00:07:24.588 real 0m2.747s 00:07:24.588 user 0m7.127s 00:07:24.588 sys 0m0.368s 00:07:24.588 17:57:40 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:24.588 17:57:40 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:07:24.588 17:57:40 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:24.588 17:57:40 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:24.588 17:57:40 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:24.588 17:57:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.588 ************************************ 00:07:24.588 START TEST bdev_nbd 00:07:24.588 ************************************ 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61188 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61188 /var/tmp/spdk-nbd.sock 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 61188 ']' 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:24.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
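The bdev_nbd test being set up here exports each bdev as a kernel /dev/nbdX node over the private /var/tmp/spdk-nbd.sock RPC socket and verifies it with a direct-I/O dd, exactly the sequence traced below. A condensed sketch of one iteration, assuming the nbd kernel module is loaded (the harness checks /sys/module/nbd above):

  SOCK=/var/tmp/spdk-nbd.sock
  cd /home/vagrant/spdk_repo/spdk
  # Minimal SPDK app exposing the bdevs on the private RPC socket.
  ./test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 --json test/bdev/bdev.json &
  # Export Nvme0n1 as /dev/nbd0; the harness then polls /proc/partitions
  # until the node appears.
  ./scripts/rpc.py -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0
  grep -q -w nbd0 /proc/partitions
  # Read one 4096-byte block through the kernel block layer to prove the
  # mapping works (same dd the trace shows; O_DIRECT bypasses the cache).
  dd if=/dev/nbd0 of=test/bdev/nbdtest bs=4096 count=1 iflag=direct
  # Tear the export back down.
  ./scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0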
00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:24.588 17:57:40 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:24.846 [2024-10-28 17:57:41.071195] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:07:24.846 [2024-10-28 17:57:41.071340] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:24.846 [2024-10-28 17:57:41.250438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.105 [2024-10-28 17:57:41.378699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:25.671 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:25.930 17:57:42 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:25.930 1+0 records in 00:07:25.930 1+0 records out 00:07:25.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000427841 s, 9.6 MB/s 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:25.930 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:07:26.198 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:26.198 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:07:26.456 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:26.456 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:26.456 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:26.456 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:26.456 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:26.456 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.457 1+0 records in 00:07:26.457 1+0 records out 00:07:26.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000462141 s, 8.9 MB/s 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:26.457 17:57:42 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:26.457 17:57:42 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:07:26.714 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.715 1+0 records in 00:07:26.715 1+0 records out 00:07:26.715 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000662636 s, 6.2 MB/s 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:26.715 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( 
i = 1 )) 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:26.973 1+0 records in 00:07:26.973 1+0 records out 00:07:26.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00054841 s, 7.5 MB/s 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:26.973 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.539 1+0 records in 00:07:27.539 1+0 records out 00:07:27.539 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000718695 s, 5.7 MB/s 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:27.539 17:57:43 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:07:27.798 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:27.799 1+0 records in 00:07:27.799 1+0 records out 00:07:27.799 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000756741 s, 5.4 MB/s 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:07:27.799 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:28.057 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:28.057 { 00:07:28.057 "nbd_device": "/dev/nbd0", 00:07:28.057 "bdev_name": "Nvme0n1" 00:07:28.057 }, 00:07:28.057 { 00:07:28.057 "nbd_device": "/dev/nbd1", 00:07:28.057 "bdev_name": "Nvme1n1" 00:07:28.057 }, 00:07:28.057 { 00:07:28.057 "nbd_device": "/dev/nbd2", 00:07:28.057 "bdev_name": "Nvme2n1" 00:07:28.057 }, 00:07:28.057 { 00:07:28.057 "nbd_device": "/dev/nbd3", 00:07:28.057 "bdev_name": "Nvme2n2" 00:07:28.057 }, 00:07:28.057 { 00:07:28.057 "nbd_device": "/dev/nbd4", 00:07:28.057 "bdev_name": "Nvme2n3" 00:07:28.057 }, 00:07:28.057 { 00:07:28.057 "nbd_device": "/dev/nbd5", 00:07:28.058 "bdev_name": "Nvme3n1" 00:07:28.058 } 00:07:28.058 ]' 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:28.058 { 00:07:28.058 "nbd_device": "/dev/nbd0", 00:07:28.058 "bdev_name": "Nvme0n1" 00:07:28.058 }, 00:07:28.058 { 00:07:28.058 "nbd_device": "/dev/nbd1", 00:07:28.058 "bdev_name": "Nvme1n1" 00:07:28.058 }, 00:07:28.058 { 00:07:28.058 
"nbd_device": "/dev/nbd2", 00:07:28.058 "bdev_name": "Nvme2n1" 00:07:28.058 }, 00:07:28.058 { 00:07:28.058 "nbd_device": "/dev/nbd3", 00:07:28.058 "bdev_name": "Nvme2n2" 00:07:28.058 }, 00:07:28.058 { 00:07:28.058 "nbd_device": "/dev/nbd4", 00:07:28.058 "bdev_name": "Nvme2n3" 00:07:28.058 }, 00:07:28.058 { 00:07:28.058 "nbd_device": "/dev/nbd5", 00:07:28.058 "bdev_name": "Nvme3n1" 00:07:28.058 } 00:07:28.058 ]' 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.058 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:28.623 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:28.623 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:28.623 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:28.623 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.623 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.623 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:28.623 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.623 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.623 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.624 17:57:44 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:28.881 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:29.138 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:29.138 17:57:45 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:29.138 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:29.138 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.138 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.138 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:29.138 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:29.138 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.138 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.138 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.397 17:57:45 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:29.655 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 
00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.224 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:30.482 17:57:46 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:07:30.741 /dev/nbd0 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:30.741 1+0 records in 00:07:30.741 1+0 records out 00:07:30.741 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000505193 s, 8.1 MB/s 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:30.741 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:07:31.002 /dev/nbd1 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:31.002 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:31.259 1+0 records in 00:07:31.259 1+0 records out 
00:07:31.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00053202 s, 7.7 MB/s 00:07:31.259 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.259 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:31.259 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.259 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:31.259 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:31.259 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.259 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:31.259 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:07:31.519 /dev/nbd10 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:31.519 1+0 records in 00:07:31.519 1+0 records out 00:07:31.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000567407 s, 7.2 MB/s 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:31.519 17:57:47 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:07:31.778 /dev/nbd11 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:07:31.778 17:57:48 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:31.778 1+0 records in 00:07:31.778 1+0 records out 00:07:31.778 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000621263 s, 6.6 MB/s 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:31.778 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:07:32.036 /dev/nbd12 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.036 1+0 records in 00:07:32.036 1+0 records out 00:07:32.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000544411 s, 7.5 MB/s 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:32.036 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:07:32.603 /dev/nbd13 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:32.603 1+0 records in 00:07:32.603 1+0 records out 00:07:32.603 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000616333 s, 6.6 MB/s 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:32.603 17:57:48 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd0", 00:07:32.861 "bdev_name": "Nvme0n1" 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd1", 00:07:32.861 "bdev_name": "Nvme1n1" 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd10", 00:07:32.861 "bdev_name": "Nvme2n1" 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd11", 00:07:32.861 "bdev_name": "Nvme2n2" 00:07:32.861 }, 
00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd12", 00:07:32.861 "bdev_name": "Nvme2n3" 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd13", 00:07:32.861 "bdev_name": "Nvme3n1" 00:07:32.861 } 00:07:32.861 ]' 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd0", 00:07:32.861 "bdev_name": "Nvme0n1" 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd1", 00:07:32.861 "bdev_name": "Nvme1n1" 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd10", 00:07:32.861 "bdev_name": "Nvme2n1" 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd11", 00:07:32.861 "bdev_name": "Nvme2n2" 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd12", 00:07:32.861 "bdev_name": "Nvme2n3" 00:07:32.861 }, 00:07:32.861 { 00:07:32.861 "nbd_device": "/dev/nbd13", 00:07:32.861 "bdev_name": "Nvme3n1" 00:07:32.861 } 00:07:32.861 ]' 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:32.861 /dev/nbd1 00:07:32.861 /dev/nbd10 00:07:32.861 /dev/nbd11 00:07:32.861 /dev/nbd12 00:07:32.861 /dev/nbd13' 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:32.861 /dev/nbd1 00:07:32.861 /dev/nbd10 00:07:32.861 /dev/nbd11 00:07:32.861 /dev/nbd12 00:07:32.861 /dev/nbd13' 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:32.861 256+0 records in 00:07:32.861 256+0 records out 00:07:32.861 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00767929 s, 137 MB/s 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:32.861 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:33.119 256+0 records in 00:07:33.119 256+0 records out 00:07:33.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135819 s, 7.7 MB/s 00:07:33.119 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.119 17:57:49 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:33.119 256+0 records in 00:07:33.119 256+0 records out 00:07:33.119 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.128805 s, 8.1 MB/s 00:07:33.119 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.119 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:33.383 256+0 records in 00:07:33.383 256+0 records out 00:07:33.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.156766 s, 6.7 MB/s 00:07:33.383 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.383 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:33.383 256+0 records in 00:07:33.383 256+0 records out 00:07:33.383 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.149773 s, 7.0 MB/s 00:07:33.383 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.383 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:33.641 256+0 records in 00:07:33.641 256+0 records out 00:07:33.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148013 s, 7.1 MB/s 00:07:33.641 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:33.641 17:57:49 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:33.641 256+0 records in 00:07:33.641 256+0 records out 00:07:33.641 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.135498 s, 7.7 MB/s 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.641 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:33.899 17:57:50 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:33.899 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:33.900 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.158 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.416 17:57:50 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:34.675 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.241 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:35.499 17:57:51 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:35.757 
17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:35.757 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:07:36.015 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:07:36.273 malloc_lvol_verify 00:07:36.273 17:57:52 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:07:36.840 47a818a8-f366-4828-b58d-82ecdeef2f67 00:07:36.840 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:07:37.098 9897a1bf-be8e-4bc9-9214-0d01e4b0ade4 00:07:37.098 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:07:37.357 /dev/nbd0 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:07:37.357 17:57:53 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:07:37.357 mke2fs 1.47.0 (5-Feb-2023) 00:07:37.357 Discarding device blocks: 0/4096 done 00:07:37.357 Creating filesystem with 4096 1k blocks and 1024 inodes 00:07:37.357 00:07:37.357 Allocating group tables: 0/1 done 00:07:37.357 Writing inode tables: 0/1 done 00:07:37.357 Creating journal (1024 blocks): done 00:07:37.357 Writing superblocks and filesystem accounting information: 0/1 done 00:07:37.357 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:37.357 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61188 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 61188 ']' 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 61188 00:07:37.614 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:07:37.615 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:37.615 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61188 00:07:37.615 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:37.615 killing process with pid 61188 00:07:37.615 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:37.615 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61188' 00:07:37.615 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 61188 00:07:37.615 17:57:53 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 61188 00:07:38.987 ************************************ 00:07:38.987 END TEST bdev_nbd 00:07:38.987 ************************************ 00:07:38.987 17:57:55 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:07:38.987 00:07:38.987 real 0m14.073s 00:07:38.987 user 0m20.624s 00:07:38.987 sys 0m4.292s 00:07:38.987 17:57:55 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:38.987 17:57:55 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:07:38.987 17:57:55 blockdev_nvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:07:38.987 17:57:55 blockdev_nvme -- bdev/blockdev.sh@763 -- # '[' nvme = nvme ']' 00:07:38.987 skipping fio tests on NVMe due to multi-ns failures. 00:07:38.987 17:57:55 blockdev_nvme -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:07:38.987 17:57:55 blockdev_nvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:38.987 17:57:55 blockdev_nvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:38.987 17:57:55 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:38.987 17:57:55 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:38.987 17:57:55 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:38.987 ************************************ 00:07:38.987 START TEST bdev_verify 00:07:38.987 ************************************ 00:07:38.987 17:57:55 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:07:38.987 [2024-10-28 17:57:55.197764] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:07:38.987 [2024-10-28 17:57:55.198004] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61612 ] 00:07:38.987 [2024-10-28 17:57:55.382004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:39.245 [2024-10-28 17:57:55.486500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.245 [2024-10-28 17:57:55.486511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.812 Running I/O for 5 seconds... 
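The bdevperf invocation that just started is worth unpacking: -q 128 is the queue depth per job, -o 4096 the I/O size in bytes, -w verify a write-then-read-back-and-compare workload, -t 5 the run time in seconds, and -m 0x3 pins the app to cores 0 and 1 (hence the two reactor notices above). The -C flag appears to let every core submit I/O to every bdev, which would explain why the table below lists each namespace twice, once per core mask. To repeat the run by hand against the same generated config (command copied verbatim from the trace):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3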
00:07:42.121 19712.00 IOPS, 77.00 MiB/s [2024-10-28T17:57:59.617Z] 18496.00 IOPS, 72.25 MiB/s [2024-10-28T17:58:00.552Z] 18944.00 IOPS, 74.00 MiB/s [2024-10-28T17:58:01.485Z] 19600.00 IOPS, 76.56 MiB/s [2024-10-28T17:58:01.485Z] 19328.00 IOPS, 75.50 MiB/s
00:07:45.007 Latency(us)
00:07:45.007 [2024-10-28T17:58:01.485Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:45.007 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0x0 length 0xbd0bd
00:07:45.007 Nvme0n1 : 5.05 1571.34 6.14 0.00 0.00 81185.19 17992.61 72447.07
00:07:45.007 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:07:45.007 Nvme0n1 : 5.04 1600.72 6.25 0.00 0.00 79671.76 16801.05 75783.45
00:07:45.007 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0x0 length 0xa0000
00:07:45.007 Nvme1n1 : 5.05 1570.75 6.14 0.00 0.00 81055.59 19660.80 71017.19
00:07:45.007 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0xa0000 length 0xa0000
00:07:45.007 Nvme1n1 : 5.06 1605.72 6.27 0.00 0.00 79246.38 6166.34 70540.57
00:07:45.007 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0x0 length 0x80000
00:07:45.007 Nvme2n1 : 5.05 1570.18 6.13 0.00 0.00 80933.30 20256.58 70540.57
00:07:45.007 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0x80000 length 0x80000
00:07:45.007 Nvme2n1 : 5.06 1605.23 6.27 0.00 0.00 79070.60 6285.50 68634.07
00:07:45.007 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0x0 length 0x80000
00:07:45.007 Nvme2n2 : 5.06 1569.55 6.13 0.00 0.00 80810.89 17277.67 67680.81
00:07:45.007 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0x80000 length 0x80000
00:07:45.007 Nvme2n2 : 5.08 1613.64 6.30 0.00 0.00 78655.72 10307.03 70540.57
00:07:45.007 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0x0 length 0x80000
00:07:45.007 Nvme2n3 : 5.10 1581.33 6.18 0.00 0.00 80184.18 15728.64 70540.57
00:07:45.007 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0x80000 length 0x80000
00:07:45.007 Nvme2n3 : 5.08 1613.15 6.30 0.00 0.00 78524.89 10604.92 71493.82
00:07:45.007 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:07:45.007 Verification LBA range: start 0x0 length 0x20000
00:07:45.007 Nvme3n1 : 5.10 1580.77 6.17 0.00 0.00 80049.36 9949.56 73400.32
00:07:45.007 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:07:45.008 Verification LBA range: start 0x20000 length 0x20000
00:07:45.008 Nvme3n1 : 5.08 1612.67 6.30 0.00 0.00 78393.91 10724.07 74830.20
00:07:45.008 [2024-10-28T17:58:01.486Z] ===================================================================================================================
00:07:45.008 [2024-10-28T17:58:01.486Z] Total : 19095.05 74.59 0.00 0.00 79802.91 6166.34 75783.45
00:07:46.383
00:07:46.383 real 0m7.506s
00:07:46.383 user 0m13.869s
00:07:46.383 sys 0m0.274s
00:07:46.383 17:58:02 blockdev_nvme.bdev_verify --
common/autotest_common.sh@1128 -- # xtrace_disable 00:07:46.383 17:58:02 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:07:46.383 ************************************ 00:07:46.383 END TEST bdev_verify 00:07:46.383 ************************************ 00:07:46.383 17:58:02 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:46.383 17:58:02 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:07:46.383 17:58:02 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:46.383 17:58:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:46.383 ************************************ 00:07:46.383 START TEST bdev_verify_big_io 00:07:46.383 ************************************ 00:07:46.383 17:58:02 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:07:46.383 [2024-10-28 17:58:02.735075] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:07:46.383 [2024-10-28 17:58:02.735223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61710 ] 00:07:46.655 [2024-10-28 17:58:02.911399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:46.655 [2024-10-28 17:58:03.057049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.655 [2024-10-28 17:58:03.057057] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.613 Running I/O for 5 seconds... 
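Note the CPU accounting at the end of the verify test above: user time (0m13.869s) is close to twice the wall time (0m7.506s). That is expected rather than a problem, since SPDK reactors busy-poll; with core mask 0x3 both cores spin at roughly 100% for the whole run. A quick sanity check of the ratio (plain arithmetic, nothing SPDK-specific):

    awk 'BEGIN { printf "%.2f cores busy\n", (13.869 + 0.274) / 7.506 }'  # ~1.88 of the 2 reactors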
00:07:53.169 1904.00 IOPS, 119.00 MiB/s [2024-10-28T17:58:09.904Z] 2622.50 IOPS, 163.91 MiB/s [2024-10-28T17:58:10.162Z] 2941.67 IOPS, 183.85 MiB/s
00:07:53.684 Latency(us)
00:07:53.684 [2024-10-28T17:58:10.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:53.684 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x0 length 0xbd0b
00:07:53.684 Nvme0n1 : 5.71 123.34 7.71 0.00 0.00 1003880.60 16562.73 941811.90
00:07:53.684 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:53.684 Nvme0n1 : 5.71 112.10 7.01 0.00 0.00 1086814.02 16920.20 1258291.20
00:07:53.684 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x0 length 0xa000
00:07:53.684 Nvme1n1 : 5.71 123.25 7.70 0.00 0.00 975872.34 82932.83 869364.83
00:07:53.684 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0xa000 length 0xa000
00:07:53.684 Nvme1n1 : 5.85 113.94 7.12 0.00 0.00 1027522.66 113913.48 1029510.98
00:07:53.684 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x0 length 0x8000
00:07:53.684 Nvme2n1 : 5.82 127.24 7.95 0.00 0.00 920864.02 42657.98 907494.87
00:07:53.684 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x8000 length 0x8000
00:07:53.684 Nvme2n1 : 5.89 119.46 7.47 0.00 0.00 956218.82 35031.97 831234.79
00:07:53.684 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x0 length 0x8000
00:07:53.684 Nvme2n2 : 5.76 127.76 7.98 0.00 0.00 898569.47 42657.98 945624.90
00:07:53.684 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x8000 length 0x8000
00:07:53.684 Nvme2n2 : 5.96 124.93 7.81 0.00 0.00 878186.40 26571.87 865551.83
00:07:53.684 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x0 length 0x8000
00:07:53.684 Nvme2n3 : 5.83 131.77 8.24 0.00 0.00 847137.82 63391.19 976128.93
00:07:53.684 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x8000 length 0x8000
00:07:53.684 Nvme2n3 : 5.99 126.86 7.93 0.00 0.00 842111.77 12749.73 1937005.85
00:07:53.684 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x0 length 0x2000
00:07:53.684 Nvme3n1 : 5.89 148.05 9.25 0.00 0.00 737308.22 7983.48 983754.94
00:07:53.684 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:53.684 Verification LBA range: start 0x2000 length 0x2000
00:07:53.684 Nvme3n1 : 6.07 164.54 10.28 0.00 0.00 629374.33 614.40 1998013.91
00:07:53.684 [2024-10-28T17:58:10.162Z] ===================================================================================================================
00:07:53.684 [2024-10-28T17:58:10.162Z] Total : 1543.24 96.45 0.00 0.00 885879.17 614.40 1998013.91
00:07:55.060
00:07:55.060 real 0m8.838s
00:07:55.060 user 0m16.512s
00:07:55.060 sys 0m0.281s
00:07:55.060 17:58:11 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:55.060 17:58:11
blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 ************************************ 00:07:55.060 END TEST bdev_verify_big_io 00:07:55.060 ************************************ 00:07:55.060 17:58:11 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:55.060 17:58:11 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:55.060 17:58:11 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:55.060 17:58:11 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:55.060 ************************************ 00:07:55.060 START TEST bdev_write_zeroes 00:07:55.060 ************************************ 00:07:55.060 17:58:11 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:55.327 [2024-10-28 17:58:11.660005] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:07:55.327 [2024-10-28 17:58:11.660159] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61825 ] 00:07:55.605 [2024-10-28 17:58:11.843023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.605 [2024-10-28 17:58:11.976435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.172 Running I/O for 1 seconds... 
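The MiB/s column in both bdevperf tables above is derived directly from IOPS times the I/O size, so the totals can be cross-checked by hand: the 4 KiB verify run reported 19095.05 IOPS and the 64 KiB big-I/O run 1543.24 IOPS. For example:

    awk 'BEGIN { printf "%.2f\n", 19095.05 * 4096 / 1048576 }'   # -> 74.59 MiB/s, matching the verify Total row
    awk 'BEGIN { printf "%.2f\n", 1543.24 * 65536 / 1048576 }'   # -> 96.45 MiB/s, matching the big-I/O Total row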
00:07:57.544 50304.00 IOPS, 196.50 MiB/s
00:07:57.544
00:07:57.544 Latency(us)
00:07:57.544 [2024-10-28T17:58:14.022Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:57.544 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:57.544 Nvme0n1 : 1.03 8353.81 32.63 0.00 0.00 15283.25 7328.12 29669.93
00:07:57.544 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:57.544 Nvme1n1 : 1.03 8340.78 32.58 0.00 0.00 15285.30 11915.64 28835.84
00:07:57.544 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:57.544 Nvme2n1 : 1.03 8328.28 32.53 0.00 0.00 15224.12 10604.92 27644.28
00:07:57.545 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:57.545 Nvme2n2 : 1.03 8315.70 32.48 0.00 0.00 15219.28 10307.03 26929.34
00:07:57.545 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:57.545 Nvme2n3 : 1.03 8302.66 32.43 0.00 0.00 15209.22 9711.24 27644.28
00:07:57.545 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:57.545 Nvme3n1 : 1.03 8290.24 32.38 0.00 0.00 15204.50 10009.13 29908.25
00:07:57.545 [2024-10-28T17:58:14.023Z] ===================================================================================================================
00:07:57.545 [2024-10-28T17:58:14.023Z] Total : 49931.46 195.04 0.00 0.00 15237.61 7328.12 29908.25
00:07:58.479
00:07:58.479 real 0m3.268s
00:07:58.479 user 0m2.901s
00:07:58.479 sys 0m0.242s
00:07:58.479 17:58:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable
00:07:58.479 17:58:14 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:58.479 ************************************
00:07:58.479 END TEST bdev_write_zeroes
00:07:58.479 ************************************
00:07:58.479 17:58:14 blockdev_nvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:58.479 17:58:14 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']'
00:07:58.479 17:58:14 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable
00:07:58.479 17:58:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x
00:07:58.479 ************************************
00:07:58.479 START TEST bdev_json_nonenclosed
00:07:58.479 ************************************
00:07:58.479 17:58:14 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:58.738 [2024-10-28 17:58:14.962265] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization...
00:07:58.738 [2024-10-28 17:58:14.962429] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61878 ] 00:07:58.738 [2024-10-28 17:58:15.151227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.996 [2024-10-28 17:58:15.278755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.996 [2024-10-28 17:58:15.278896] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:58.996 [2024-10-28 17:58:15.278942] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:58.996 [2024-10-28 17:58:15.278964] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.254 00:07:59.254 real 0m0.693s 00:07:59.254 user 0m0.459s 00:07:59.254 sys 0m0.128s 00:07:59.254 17:58:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:59.254 17:58:15 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:59.254 ************************************ 00:07:59.254 END TEST bdev_json_nonenclosed 00:07:59.254 ************************************ 00:07:59.254 17:58:15 blockdev_nvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:59.254 17:58:15 blockdev_nvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:07:59.254 17:58:15 blockdev_nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:59.254 17:58:15 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:07:59.254 ************************************ 00:07:59.254 START TEST bdev_json_nonarray 00:07:59.254 ************************************ 00:07:59.254 17:58:15 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:59.254 [2024-10-28 17:58:15.702632] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:07:59.255 [2024-10-28 17:58:15.702804] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61909 ] 00:07:59.513 [2024-10-28 17:58:15.889011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.772 [2024-10-28 17:58:16.013583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.772 [2024-10-28 17:58:16.013710] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
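The error just logged, like the "not enclosed in {}" one before it, comes from a deliberate negative test: bdevperf is pointed at a malformed --json config and must bail out through spdk_app_stop instead of running a workload. The fixture files themselves are not echoed into the log, so the shapes below are only illustrative guesses inferred from the two json_config error messages (the top level must be a {...} object, and "subsystems" must be an array); they are not the real file contents:

    # Hypothetical stand-ins for the two fixtures; scratch paths, not the repo's files.
    printf '"subsystems": []\n' > /tmp/nonenclosed.json            # valid JSON value, but not enclosed in {}
    printf '{ "subsystems": "not-an-array" }\n' > /tmp/nonarray.json  # object, but subsystems is a string
    # Feeding either file to bdevperf via --json should fail config load with a similar *ERROR* line.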
00:07:59.772 [2024-10-28 17:58:16.013743] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:59.772 [2024-10-28 17:58:16.013759] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:00.030 00:08:00.030 real 0m0.683s 00:08:00.030 user 0m0.448s 00:08:00.030 sys 0m0.129s 00:08:00.030 17:58:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.031 17:58:16 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:00.031 ************************************ 00:08:00.031 END TEST bdev_json_nonarray 00:08:00.031 ************************************ 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@786 -- # [[ nvme == bdev ]] 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@793 -- # [[ nvme == gpt ]] 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@797 -- # [[ nvme == crypto_sw ]] 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@810 -- # cleanup 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:08:00.031 17:58:16 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:08:00.031 00:08:00.031 real 0m44.348s 00:08:00.031 user 1m8.065s 00:08:00.031 sys 0m6.828s 00:08:00.031 17:58:16 blockdev_nvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:00.031 ************************************ 00:08:00.031 17:58:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:08:00.031 END TEST blockdev_nvme 00:08:00.031 ************************************ 00:08:00.031 17:58:16 -- spdk/autotest.sh@209 -- # uname -s 00:08:00.031 17:58:16 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:08:00.031 17:58:16 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:00.031 17:58:16 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:00.031 17:58:16 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:00.031 17:58:16 -- common/autotest_common.sh@10 -- # set +x 00:08:00.031 ************************************ 00:08:00.031 START TEST blockdev_nvme_gpt 00:08:00.031 ************************************ 00:08:00.031 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:08:00.031 * Looking for test storage... 
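The two JSON tests that just finished (bdev_json_nonenclosed and bdev_json_nonarray) feed bdevperf deliberately malformed --json configs: nonenclosed.json is not wrapped in {} and nonarray.json carries a non-array "subsystems" value, so json_config_prepare_ctx rejects both and spdk_app_stop exits non-zero, which is the expected pass condition here. For contrast, a minimally valid config has this shape (illustrative only, not the actual fixture contents):

  # a well-formed SPDK JSON config is an object whose "subsystems" key is an array
  cat > /tmp/valid_config.json <<'EOF'
  {
    "subsystems": [
      { "subsystem": "bdev", "config": [] }
    ]
  }
  EOF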
00:08:00.031 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:00.031 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:00.031 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lcov --version 00:08:00.031 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:00.290 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:00.290 17:58:16 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:08:00.290 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:00.290 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:00.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.290 --rc genhtml_branch_coverage=1 00:08:00.290 --rc genhtml_function_coverage=1 00:08:00.290 --rc genhtml_legend=1 00:08:00.290 --rc geninfo_all_blocks=1 00:08:00.290 --rc geninfo_unexecuted_blocks=1 00:08:00.290 00:08:00.290 ' 00:08:00.290 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:00.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.290 --rc 
genhtml_branch_coverage=1 00:08:00.290 --rc genhtml_function_coverage=1 00:08:00.290 --rc genhtml_legend=1 00:08:00.290 --rc geninfo_all_blocks=1 00:08:00.290 --rc geninfo_unexecuted_blocks=1 00:08:00.290 00:08:00.290 ' 00:08:00.290 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:00.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.290 --rc genhtml_branch_coverage=1 00:08:00.290 --rc genhtml_function_coverage=1 00:08:00.290 --rc genhtml_legend=1 00:08:00.290 --rc geninfo_all_blocks=1 00:08:00.290 --rc geninfo_unexecuted_blocks=1 00:08:00.290 00:08:00.290 ' 00:08:00.290 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:00.290 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:00.290 --rc genhtml_branch_coverage=1 00:08:00.290 --rc genhtml_function_coverage=1 00:08:00.290 --rc genhtml_legend=1 00:08:00.290 --rc geninfo_all_blocks=1 00:08:00.290 --rc geninfo_unexecuted_blocks=1 00:08:00.290 00:08:00.290 ' 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # uname -s 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@681 -- # test_type=gpt 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@682 -- # crypto_device= 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@683 -- # dek= 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@684 -- # env_ctx= 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == bdev ]] 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@689 -- # [[ gpt == crypto_* ]] 00:08:00.290 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:08:00.291 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61993 00:08:00.291 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:00.291 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 61993 
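The lcov gate traced above compares versions with the cmp_versions helper from scripts/common.sh, splitting each version string on ".-:" and comparing the fields numerically. A condensed equivalent of its "lt 1.15 2" decision, using GNU sort -V instead of the project helper (a sketch, not the project code):

  # true when $1 sorts strictly before $2 as a version
  lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo "lcov predates 2.x: keep the branch/function coverage flags"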
00:08:00.291 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@833 -- # '[' -z 61993 ']' 00:08:00.291 17:58:16 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:00.291 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.291 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:00.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.291 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.291 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:00.291 17:58:16 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:00.291 [2024-10-28 17:58:16.716536] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:08:00.291 [2024-10-28 17:58:16.716711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61993 ] 00:08:00.549 [2024-10-28 17:58:16.904053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.807 [2024-10-28 17:58:17.050202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.374 17:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:01.374 17:58:17 blockdev_nvme_gpt -- common/autotest_common.sh@866 -- # return 0 00:08:01.374 17:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:08:01.374 17:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@701 -- # setup_gpt_conf 00:08:01.374 17:58:17 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:08:01.942 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:01.942 Waiting for block devices as requested 00:08:01.942 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:08:02.200 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:08:02.200 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:08:02.200 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:08:07.540 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1656 -- # local nvme bdf 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:07.540 17:58:23 
blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:08:07.540 17:58:23 
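get_zoned_devs, traced above, walks /sys/block/nvme* and would exclude any namespace whose queue/zoned attribute reports something other than "none"; in this run every check evaluates [[ none != none ]], so all six namespaces stay in nvme_devs. The per-device probe reduces to this (a sketch of the same logic):

  for dev in /sys/block/nvme*; do
      [ -e "$dev/queue/zoned" ] || continue                      # attribute missing: treat as not zoned
      [ "$(cat "$dev/queue/zoned")" != "none" ] && echo "${dev##*/} is zoned"
  done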
blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:08:07.540 BYT; 00:08:07.540 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:08:07.540 BYT; 00:08:07.540 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:08:07.540 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:07.541 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:08:07.541 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:07.541 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:08:07.541 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@427 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@429 -- # 
spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:07.541 17:58:23 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:07.541 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:08:07.541 17:58:23 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:08:08.474 The operation has completed successfully. 00:08:08.474 17:58:24 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:08:09.409 The operation has completed successfully. 00:08:09.409 17:58:25 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:08:09.975 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:08:10.541 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.541 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.541 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.799 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:08:10.799 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:08:10.799 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.799 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:10.799 [] 00:08:10.799 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:10.799 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:08:10.799 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:08:10.799 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:08:10.799 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:10.799 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:08:10.799 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.799 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.057 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.057 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # cat 00:08:11.057 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:08:11.057 17:58:27 
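The get_spdk_gpt_old/get_spdk_gpt steps above scrape SPDK's partition-type GUIDs out of module/bdev/gpt/gpt.h by splitting the macro's parenthesized argument list, and sgdisk -t/-u then stamps the two freshly created partitions with those GUIDs so the gpt bdev module will claim them. The scrape is roughly this (a sketch; it assumes the macro keeps the shape seen in this run):

  GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h
  IFS='()' read -r _ spdk_guid _ < <(grep -w SPDK_GPT_PART_TYPE_GUID "$GPT_H")
  spdk_guid=${spdk_guid//, /-}   # "0x6527994e, 0x2c5a, ..." -> "0x6527994e-0x2c5a-..."
  spdk_guid=${spdk_guid//0x/}    # -> 6527994e-2c5a-4eec-9613-8f5944074e8b
  echo "$spdk_guid"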
blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.057 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.057 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.057 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.316 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.316 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:08:11.316 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:08:11.316 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:08:11.316 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.316 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:11.316 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.316 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:08:11.316 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # jq -r .name 00:08:11.317 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c2c15cb9-310f-4f65-8bb3-c89f9e134991"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c2c15cb9-310f-4f65-8bb3-c89f9e134991",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' 
"num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "2036fa43-a395-4063-bd8e-a9f8150aa7eb"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "2036fa43-a395-4063-bd8e-a9f8150aa7eb",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' 
"nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "1a222a60-12a2-48df-afff-9dd6f3e5d1c5"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "1a222a60-12a2-48df-afff-9dd6f3e5d1c5",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "9bba20bc-1904-453f-8953-1f227ce6b368"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "9bba20bc-1904-453f-8953-1f227ce6b368",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3d9ded81-6f91-4eba-9ebf-e17d173d98f1"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3d9ded81-6f91-4eba-9ebf-e17d173d98f1",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:08:11.317 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:08:11.317 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@751 -- # hello_world_bdev=Nvme0n1 00:08:11.317 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:08:11.317 17:58:27 blockdev_nvme_gpt -- bdev/blockdev.sh@753 -- # killprocess 61993 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@952 -- # '[' -z 61993 ']' 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@956 -- # kill -0 61993 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # uname 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 61993 00:08:11.317 killing process with pid 61993 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 61993' 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@971 -- # kill 61993 00:08:11.317 17:58:27 blockdev_nvme_gpt -- common/autotest_common.sh@976 -- # wait 61993 00:08:13.847 17:58:29 blockdev_nvme_gpt -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:13.847 17:58:29 blockdev_nvme_gpt -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:13.847 17:58:29 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:08:13.847 17:58:29 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:13.847 17:58:29 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:13.847 ************************************ 00:08:13.847 START TEST bdev_hello_world 00:08:13.847 ************************************ 00:08:13.847 17:58:29 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:08:13.847 
[2024-10-28 17:58:29.929069] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:08:13.847 [2024-10-28 17:58:29.929254] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62627 ] 00:08:13.847 [2024-10-28 17:58:30.118004] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.847 [2024-10-28 17:58:30.243146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.414 [2024-10-28 17:58:30.874082] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:08:14.414 [2024-10-28 17:58:30.874137] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:08:14.414 [2024-10-28 17:58:30.874169] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:08:14.414 [2024-10-28 17:58:30.879909] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:08:14.414 [2024-10-28 17:58:30.880604] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:08:14.414 [2024-10-28 17:58:30.880674] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:08:14.414 [2024-10-28 17:58:30.880955] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:08:14.414 00:08:14.414 [2024-10-28 17:58:30.881012] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:08:15.789 00:08:15.789 real 0m2.096s 00:08:15.789 user 0m1.769s 00:08:15.789 sys 0m0.214s 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:15.789 ************************************ 00:08:15.789 END TEST bdev_hello_world 00:08:15.789 ************************************ 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:08:15.789 17:58:31 blockdev_nvme_gpt -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:08:15.789 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:08:15.789 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:15.789 17:58:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:15.789 ************************************ 00:08:15.789 START TEST bdev_bounds 00:08:15.789 ************************************ 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=62669 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:08:15.789 Process bdevio pid: 62669 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 62669' 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 62669 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 62669 ']' 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.789 17:58:31 
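bdev_hello_world, which finished above in about 2.1 s, drives the hello_bdev example end to end: open Nvme0n1, write "Hello World!" through an io channel, read it back, print the string, stop the app. Its standalone form is the command from the trace:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1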
blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.789 17:58:31 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:15.789 [2024-10-28 17:58:32.078724] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:08:15.789 [2024-10-28 17:58:32.078959] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62669 ] 00:08:16.046 [2024-10-28 17:58:32.268053] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:16.046 [2024-10-28 17:58:32.423402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.046 [2024-10-28 17:58:32.423556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.046 [2024-10-28 17:58:32.423556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.612 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:16.612 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:08:16.612 17:58:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:08:16.870 I/O targets: 00:08:16.870 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:08:16.870 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:08:16.870 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:08:16.870 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:16.870 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:16.870 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:08:16.870 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:08:16.870 00:08:16.870 00:08:16.870 CUnit - A unit testing framework for C - Version 2.1-3 00:08:16.870 http://cunit.sourceforge.net/ 00:08:16.870 00:08:16.870 00:08:16.870 Suite: bdevio tests on: Nvme3n1 00:08:16.870 Test: blockdev write read block ...passed 00:08:16.870 Test: blockdev write zeroes read block ...passed 00:08:16.870 Test: blockdev write zeroes read no split ...passed 00:08:16.870 Test: blockdev write zeroes read split ...passed 00:08:16.870 Test: blockdev write zeroes read split partial ...passed 00:08:16.870 Test: blockdev reset ...[2024-10-28 17:58:33.278531] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:08:16.870 [2024-10-28 17:58:33.282330] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:08:16.870 passed 00:08:16.870 Test: blockdev write read 8 blocks ...passed 00:08:16.870 Test: blockdev write read size > 128k ...passed 00:08:16.870 Test: blockdev write read invalid size ...passed 00:08:16.870 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:16.870 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:16.870 Test: blockdev write read max offset ...passed 00:08:16.870 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:16.870 Test: blockdev writev readv 8 blocks ...passed 00:08:16.870 Test: blockdev writev readv 30 x 1block ...passed 00:08:16.870 Test: blockdev writev readv block ...passed 00:08:16.870 Test: blockdev writev readv size > 128k ...passed 00:08:16.870 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:16.870 Test: blockdev comparev and writev ...[2024-10-28 17:58:33.290215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9604000 len:0x1000 00:08:16.870 passed 00:08:16.870 Test: blockdev nvme passthru rw ...[2024-10-28 17:58:33.290283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:16.870 passed 00:08:16.870 Test: blockdev nvme passthru vendor specific ...[2024-10-28 17:58:33.291113] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:16.870 passed 00:08:16.870 Test: blockdev nvme admin passthru ...[2024-10-28 17:58:33.291163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:16.870 passed 00:08:16.870 Test: blockdev copy ...passed 00:08:16.870 Suite: bdevio tests on: Nvme2n3 00:08:16.870 Test: blockdev write read block ...passed 00:08:16.870 Test: blockdev write zeroes read block ...passed 00:08:16.870 Test: blockdev write zeroes read no split ...passed 00:08:16.870 Test: blockdev write zeroes read split ...passed 00:08:17.129 Test: blockdev write zeroes read split partial ...passed 00:08:17.129 Test: blockdev reset ...[2024-10-28 17:58:33.357754] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:17.129 [2024-10-28 17:58:33.361869] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:17.129 passed 00:08:17.129 Test: blockdev write read 8 blocks ...passed 00:08:17.129 Test: blockdev write read size > 128k ...passed 00:08:17.129 Test: blockdev write read invalid size ...passed 00:08:17.129 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:17.129 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:17.129 Test: blockdev write read max offset ...passed 00:08:17.129 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:17.129 Test: blockdev writev readv 8 blocks ...passed 00:08:17.129 Test: blockdev writev readv 30 x 1block ...passed 00:08:17.129 Test: blockdev writev readv block ...passed 00:08:17.129 Test: blockdev writev readv size > 128k ...passed 00:08:17.129 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:17.129 Test: blockdev comparev and writev ...[2024-10-28 17:58:33.369499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2c9602000 len:0x1000 00:08:17.129 [2024-10-28 17:58:33.369557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:17.129 passed 00:08:17.129 Test: blockdev nvme passthru rw ...passed 00:08:17.129 Test: blockdev nvme passthru vendor specific ...passed 00:08:17.129 Test: blockdev nvme admin passthru ...[2024-10-28 17:58:33.370375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:17.129 [2024-10-28 17:58:33.370417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:17.129 passed 00:08:17.129 Test: blockdev copy ...passed 00:08:17.129 Suite: bdevio tests on: Nvme2n2 00:08:17.129 Test: blockdev write read block ...passed 00:08:17.129 Test: blockdev write zeroes read block ...passed 00:08:17.129 Test: blockdev write zeroes read no split ...passed 00:08:17.129 Test: blockdev write zeroes read split ...passed 00:08:17.129 Test: blockdev write zeroes read split partial ...passed 00:08:17.129 Test: blockdev reset ...[2024-10-28 17:58:33.436713] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:17.129 [2024-10-28 17:58:33.440978] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:17.129 passed 00:08:17.129 Test: blockdev write read 8 blocks ...passed 00:08:17.129 Test: blockdev write read size > 128k ...passed 00:08:17.129 Test: blockdev write read invalid size ...passed 00:08:17.129 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:17.129 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:17.129 Test: blockdev write read max offset ...passed 00:08:17.129 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:17.129 Test: blockdev writev readv 8 blocks ...passed 00:08:17.129 Test: blockdev writev readv 30 x 1block ...passed 00:08:17.129 Test: blockdev writev readv block ...passed 00:08:17.129 Test: blockdev writev readv size > 128k ...passed 00:08:17.129 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:17.129 Test: blockdev comparev and writev ...[2024-10-28 17:58:33.448404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dbc38000 len:0x1000 00:08:17.129 [2024-10-28 17:58:33.448462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:17.129 passed 00:08:17.129 Test: blockdev nvme passthru rw ...passed 00:08:17.129 Test: blockdev nvme passthru vendor specific ...[2024-10-28 17:58:33.449227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:17.129 passed 00:08:17.129 Test: blockdev nvme admin passthru ...[2024-10-28 17:58:33.449270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:17.129 passed 00:08:17.129 Test: blockdev copy ...passed 00:08:17.129 Suite: bdevio tests on: Nvme2n1 00:08:17.129 Test: blockdev write read block ...passed 00:08:17.129 Test: blockdev write zeroes read block ...passed 00:08:17.129 Test: blockdev write zeroes read no split ...passed 00:08:17.129 Test: blockdev write zeroes read split ...passed 00:08:17.129 Test: blockdev write zeroes read split partial ...passed 00:08:17.129 Test: blockdev reset ...[2024-10-28 17:58:33.514613] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:08:17.129 [2024-10-28 17:58:33.519652] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:08:17.129 passed 00:08:17.129 Test: blockdev write read 8 blocks ...passed 00:08:17.129 Test: blockdev write read size > 128k ...passed 00:08:17.129 Test: blockdev write read invalid size ...passed 00:08:17.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:17.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:17.130 Test: blockdev write read max offset ...passed 00:08:17.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:17.130 Test: blockdev writev readv 8 blocks ...passed 00:08:17.130 Test: blockdev writev readv 30 x 1block ...passed 00:08:17.130 Test: blockdev writev readv block ...passed 00:08:17.130 Test: blockdev writev readv size > 128k ...passed 00:08:17.130 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:17.130 Test: blockdev comparev and writev ...[2024-10-28 17:58:33.527737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dbc34000 len:0x1000 00:08:17.130 [2024-10-28 17:58:33.527799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:17.130 passed 00:08:17.130 Test: blockdev nvme passthru rw ...passed 00:08:17.130 Test: blockdev nvme passthru vendor specific ...passed 00:08:17.130 Test: blockdev nvme admin passthru ...[2024-10-28 17:58:33.528848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:08:17.130 [2024-10-28 17:58:33.528892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:08:17.130 passed 00:08:17.130 Test: blockdev copy ...passed 00:08:17.130 Suite: bdevio tests on: Nvme1n1p2 00:08:17.130 Test: blockdev write read block ...passed 00:08:17.130 Test: blockdev write zeroes read block ...passed 00:08:17.130 Test: blockdev write zeroes read no split ...passed 00:08:17.130 Test: blockdev write zeroes read split ...passed 00:08:17.130 Test: blockdev write zeroes read split partial ...passed 00:08:17.130 Test: blockdev reset ...[2024-10-28 17:58:33.597065] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:17.130 [2024-10-28 17:58:33.601030] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:17.130 passed 00:08:17.130 Test: blockdev write read 8 blocks ...passed 00:08:17.130 Test: blockdev write read size > 128k ...passed 00:08:17.130 Test: blockdev write read invalid size ...passed 00:08:17.130 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:17.130 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:17.130 Test: blockdev write read max offset ...passed 00:08:17.130 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:17.130 Test: blockdev writev readv 8 blocks ...passed 00:08:17.388 Test: blockdev writev readv 30 x 1block ...passed 00:08:17.388 Test: blockdev writev readv block ...passed 00:08:17.388 Test: blockdev writev readv size > 128k ...passed 00:08:17.388 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:17.388 Test: blockdev comparev and writev ...[2024-10-28 17:58:33.608898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dbc30000 len:0x1000 00:08:17.388 [2024-10-28 17:58:33.608959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:17.388 passed 00:08:17.388 Test: blockdev nvme passthru rw ...passed 00:08:17.388 Test: blockdev nvme passthru vendor specific ...passed 00:08:17.388 Test: blockdev nvme admin passthru ...passed 00:08:17.388 Test: blockdev copy ...passed 00:08:17.388 Suite: bdevio tests on: Nvme1n1p1 00:08:17.388 Test: blockdev write read block ...passed 00:08:17.388 Test: blockdev write zeroes read block ...passed 00:08:17.388 Test: blockdev write zeroes read no split ...passed 00:08:17.388 Test: blockdev write zeroes read split ...passed 00:08:17.388 Test: blockdev write zeroes read split partial ...passed 00:08:17.388 Test: blockdev reset ...[2024-10-28 17:58:33.666618] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:08:17.388 [2024-10-28 17:58:33.670418] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:08:17.388 passed 00:08:17.388 Test: blockdev write read 8 blocks ...passed 00:08:17.388 Test: blockdev write read size > 128k ...passed 00:08:17.388 Test: blockdev write read invalid size ...passed 00:08:17.388 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:17.389 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:17.389 Test: blockdev write read max offset ...passed 00:08:17.389 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:17.389 Test: blockdev writev readv 8 blocks ...passed 00:08:17.389 Test: blockdev writev readv 30 x 1block ...passed 00:08:17.389 Test: blockdev writev readv block ...passed 00:08:17.389 Test: blockdev writev readv size > 128k ...passed 00:08:17.389 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:17.389 Test: blockdev comparev and writev ...[2024-10-28 17:58:33.677851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2c980e000 len:0x1000 00:08:17.389 [2024-10-28 17:58:33.677909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:08:17.389 passed 00:08:17.389 Test: blockdev nvme passthru rw ...passed 00:08:17.389 Test: blockdev nvme passthru vendor specific ...passed 00:08:17.389 Test: blockdev nvme admin passthru ...passed 00:08:17.389 Test: blockdev copy ...passed 00:08:17.389 Suite: bdevio tests on: Nvme0n1 00:08:17.389 Test: blockdev write read block ...passed 00:08:17.389 Test: blockdev write zeroes read block ...passed 00:08:17.389 Test: blockdev write zeroes read no split ...passed 00:08:17.389 Test: blockdev write zeroes read split ...passed 00:08:17.389 Test: blockdev write zeroes read split partial ...passed 00:08:17.389 Test: blockdev reset ...[2024-10-28 17:58:33.742120] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:17.389 [2024-10-28 17:58:33.745616] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:17.389 passed 00:08:17.389 Test: blockdev write read 8 blocks ...passed 00:08:17.389 Test: blockdev write read size > 128k ...passed 00:08:17.389 Test: blockdev write read invalid size ...passed 00:08:17.389 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:17.389 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:17.389 Test: blockdev write read max offset ...passed 00:08:17.389 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:17.389 Test: blockdev writev readv 8 blocks ...passed 00:08:17.389 Test: blockdev writev readv 30 x 1block ...passed 00:08:17.389 Test: blockdev writev readv block ...passed 00:08:17.389 Test: blockdev writev readv size > 128k ...passed 00:08:17.389 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:17.389 Test: blockdev comparev and writev ...[2024-10-28 17:58:33.752830] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:08:17.389 separate metadata which is not supported yet. 
00:08:17.389 passed 00:08:17.389 Test: blockdev nvme passthru rw ...passed 00:08:17.389 Test: blockdev nvme passthru vendor specific ...[2024-10-28 17:58:33.753470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:08:17.389 [2024-10-28 17:58:33.753531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:08:17.389 passed 00:08:17.389 Test: blockdev nvme admin passthru ...passed 00:08:17.389 Test: blockdev copy ...passed 00:08:17.389 00:08:17.389 Run Summary: Type Total Ran Passed Failed Inactive 00:08:17.389 suites 7 7 n/a 0 0 00:08:17.389 tests 161 161 161 0 0 00:08:17.389 asserts 1025 1025 1025 0 n/a 00:08:17.389 00:08:17.389 Elapsed time = 1.522 seconds 00:08:17.389 0 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 62669 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 62669 ']' 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 62669 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62669 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:17.389 killing process with pid 62669 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62669' 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@971 -- # kill 62669 00:08:17.389 17:58:33 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@976 -- # wait 62669 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:08:18.323 00:08:18.323 real 0m2.737s 00:08:18.323 user 0m7.017s 00:08:18.323 sys 0m0.346s 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:18.323 ************************************ 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:08:18.323 END TEST bdev_bounds 00:08:18.323 ************************************ 00:08:18.323 17:58:34 blockdev_nvme_gpt -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:18.323 17:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:08:18.323 17:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:18.323 17:58:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:18.323 ************************************ 00:08:18.323 START TEST bdev_nbd 00:08:18.323 ************************************ 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:08:18.323 17:58:34 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=62730 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 62730 /var/tmp/spdk-nbd.sock 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 62730 ']' 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:18.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:18.323 17:58:34 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:18.581 [2024-10-28 17:58:34.870138] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
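The nbd test brings up a minimal SPDK app (bdev_svc) on its own RPC socket and blocks until that socket answers before issuing any nbd RPCs, which is what waitforlisten does above. Condensed, the launch-and-wait pattern is roughly as follows (the polling loop is a simplified stand-in for waitforlisten, not its actual implementation):

    # Start the service on a dedicated socket, backed by the JSON bdev config.
    ./test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json ./test/bdev/bdev.json &
    nbd_pid=$!
    # Poll until the UNIX-domain socket accepts RPCs before driving the test.
    until ./scripts/rpc.py -s /var/tmp/spdk-nbd.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.2
    done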
00:08:18.581 [2024-10-28 17:58:34.870290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.581 [2024-10-28 17:58:35.049556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.838 [2024-10-28 17:58:35.175525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:19.777 17:58:35 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:19.777 1+0 records in 00:08:19.777 1+0 records out 00:08:19.777 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000518449 s, 7.9 MB/s 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:19.777 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:20.035 1+0 records in 00:08:20.035 1+0 records out 00:08:20.035 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000476799 s, 8.6 MB/s 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:20.035 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:20.602 1+0 records in 00:08:20.602 1+0 records out 00:08:20.602 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000540878 s, 7.6 MB/s 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:20.602 17:58:36 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:08:20.860 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:08:20.860 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:08:20.860 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:20.861 1+0 records in 00:08:20.861 1+0 records out 00:08:20.861 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000463788 s, 8.8 MB/s 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:20.861 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.151 1+0 records in 00:08:21.151 1+0 records out 00:08:21.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000647947 s, 6.3 MB/s 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:21.151 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.410 1+0 records in 00:08:21.410 1+0 records out 00:08:21.410 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000734974 s, 5.6 MB/s 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:21.410 17:58:37 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:08:21.667 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:08:21.667 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:08:21.667 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:08:21.667 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd6 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd6 /proc/partitions 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@887 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:21.668 1+0 records in 00:08:21.668 1+0 records out 00:08:21.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000612068 s, 6.7 MB/s 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:08:21.668 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:22.234 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:08:22.234 { 00:08:22.234 "nbd_device": "/dev/nbd0", 00:08:22.234 "bdev_name": "Nvme0n1" 00:08:22.234 }, 00:08:22.234 { 00:08:22.234 "nbd_device": "/dev/nbd1", 00:08:22.234 "bdev_name": "Nvme1n1p1" 00:08:22.234 }, 00:08:22.235 { 00:08:22.235 "nbd_device": "/dev/nbd2", 00:08:22.235 "bdev_name": "Nvme1n1p2" 00:08:22.235 }, 00:08:22.235 { 00:08:22.235 "nbd_device": "/dev/nbd3", 00:08:22.235 "bdev_name": "Nvme2n1" 00:08:22.235 }, 00:08:22.235 { 00:08:22.236 "nbd_device": "/dev/nbd4", 00:08:22.236 "bdev_name": "Nvme2n2" 00:08:22.236 }, 00:08:22.236 { 00:08:22.236 "nbd_device": "/dev/nbd5", 00:08:22.236 "bdev_name": "Nvme2n3" 00:08:22.236 }, 00:08:22.236 { 00:08:22.236 "nbd_device": "/dev/nbd6", 00:08:22.236 "bdev_name": "Nvme3n1" 00:08:22.236 } 00:08:22.236 ]' 00:08:22.236 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:08:22.236 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:08:22.236 { 00:08:22.236 "nbd_device": "/dev/nbd0", 00:08:22.236 "bdev_name": "Nvme0n1" 00:08:22.236 }, 00:08:22.236 { 00:08:22.236 "nbd_device": "/dev/nbd1", 00:08:22.236 "bdev_name": "Nvme1n1p1" 00:08:22.236 }, 00:08:22.236 { 00:08:22.236 "nbd_device": "/dev/nbd2", 00:08:22.236 "bdev_name": "Nvme1n1p2" 00:08:22.236 }, 00:08:22.236 { 00:08:22.236 "nbd_device": "/dev/nbd3", 00:08:22.236 "bdev_name": "Nvme2n1" 00:08:22.236 }, 00:08:22.236 { 00:08:22.236 "nbd_device": "/dev/nbd4", 00:08:22.236 "bdev_name": "Nvme2n2" 00:08:22.236 }, 00:08:22.236 { 00:08:22.236 "nbd_device": "/dev/nbd5", 00:08:22.236 "bdev_name": "Nvme2n3" 00:08:22.236 }, 00:08:22.236 { 00:08:22.236 "nbd_device": "/dev/nbd6", 00:08:22.236 "bdev_name": "Nvme3n1" 00:08:22.236 } 00:08:22.236 ]' 00:08:22.236 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:08:22.236 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:08:22.236 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:22.237 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:08:22.237 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:22.237 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:22.237 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.237 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.496 17:58:38 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:22.753 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:08:23.011 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:08:23.011 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:08:23.011 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:08:23.011 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.011 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.011 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:08:23.011 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.011 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.011 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.011 17:58:39 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:08:23.269 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:08:23.269 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:08:23.269 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:08:23.269 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.269 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.269 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:08:23.526 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.526 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.526 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.526 17:58:39 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:08:23.784 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:08:23.784 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:08:23.784 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:08:23.784 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:23.784 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:23.784 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:08:23.784 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:23.784 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:23.785 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.785 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:24.043 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.301 17:58:40 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:24.867 
17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:24.867 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:24.868 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:08:25.127 /dev/nbd0 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.127 1+0 records in 00:08:25.127 1+0 records out 00:08:25.127 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000597078 s, 6.9 MB/s 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:25.127 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:08:25.385 /dev/nbd1 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:25.385 17:58:41 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.385 1+0 records in 00:08:25.385 1+0 records out 00:08:25.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000548946 s, 7.5 MB/s 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:25.385 17:58:41 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:08:25.643 /dev/nbd10 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:25.900 1+0 records in 00:08:25.900 1+0 records out 00:08:25.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735535 s, 5.6 MB/s 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:25.900 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:25.901 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:08:26.158 /dev/nbd11 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.158 1+0 records in 00:08:26.158 1+0 records out 00:08:26.158 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000667601 s, 6.1 MB/s 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:26.158 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:08:26.416 /dev/nbd12 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 
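Every nbd_start_disk above is followed by the same readiness check: poll /proc/partitions until the device name appears (up to 20 tries), then prove the device actually services requests with a single 4 KiB O_DIRECT read whose size is verified and whose scratch file is removed. Collapsed into one helper, the pattern is roughly (a sketch; /tmp/nbdtest stands in for the repo-local scratch file used above):

    waitfornbd() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1
        done
        # One direct-I/O read confirms the kernel device answers requests.
        dd if=/dev/"$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct || return 1
        [[ $(stat -c %s /tmp/nbdtest) -ne 0 ]] || return 1
        rm -f /tmp/nbdtest
    }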
00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:26.416 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.416 1+0 records in 00:08:26.416 1+0 records out 00:08:26.416 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000742932 s, 5.5 MB/s 00:08:26.417 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.417 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:26.417 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.417 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:26.417 17:58:42 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:26.417 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.417 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:26.417 17:58:42 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:08:26.675 /dev/nbd13 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:26.675 1+0 records in 00:08:26.675 1+0 records out 00:08:26.675 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000678796 s, 6.0 MB/s 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:26.675 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:26.934 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:26.934 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:26.934 17:58:43 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:26.934 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:26.934 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:08:26.934 /dev/nbd14 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd14 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd14 /proc/partitions 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:27.192 1+0 records in 00:08:27.192 1+0 records out 00:08:27.192 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000628188 s, 6.5 MB/s 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:27.192 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd0", 00:08:27.450 "bdev_name": "Nvme0n1" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd1", 00:08:27.450 "bdev_name": "Nvme1n1p1" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd10", 00:08:27.450 "bdev_name": "Nvme1n1p2" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd11", 00:08:27.450 "bdev_name": "Nvme2n1" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd12", 00:08:27.450 "bdev_name": "Nvme2n2" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd13", 00:08:27.450 "bdev_name": "Nvme2n3" 
00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd14", 00:08:27.450 "bdev_name": "Nvme3n1" 00:08:27.450 } 00:08:27.450 ]' 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd0", 00:08:27.450 "bdev_name": "Nvme0n1" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd1", 00:08:27.450 "bdev_name": "Nvme1n1p1" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd10", 00:08:27.450 "bdev_name": "Nvme1n1p2" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd11", 00:08:27.450 "bdev_name": "Nvme2n1" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd12", 00:08:27.450 "bdev_name": "Nvme2n2" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd13", 00:08:27.450 "bdev_name": "Nvme2n3" 00:08:27.450 }, 00:08:27.450 { 00:08:27.450 "nbd_device": "/dev/nbd14", 00:08:27.450 "bdev_name": "Nvme3n1" 00:08:27.450 } 00:08:27.450 ]' 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:27.450 /dev/nbd1 00:08:27.450 /dev/nbd10 00:08:27.450 /dev/nbd11 00:08:27.450 /dev/nbd12 00:08:27.450 /dev/nbd13 00:08:27.450 /dev/nbd14' 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:27.450 /dev/nbd1 00:08:27.450 /dev/nbd10 00:08:27.450 /dev/nbd11 00:08:27.450 /dev/nbd12 00:08:27.450 /dev/nbd13 00:08:27.450 /dev/nbd14' 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:08:27.450 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:08:27.451 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:08:27.451 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:27.451 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:27.451 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:27.451 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:27.451 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:27.451 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:08:27.451 256+0 records in 00:08:27.451 256+0 records out 00:08:27.451 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00612567 s, 171 MB/s 00:08:27.451 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:27.451 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:27.708 256+0 records in 00:08:27.708 256+0 records out 00:08:27.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.13256 s, 7.9 MB/s 00:08:27.708 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:27.708 17:58:43 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:27.708 256+0 records in 00:08:27.708 256+0 records out 00:08:27.708 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.167618 s, 6.3 MB/s 00:08:27.708 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:27.708 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:08:27.983 256+0 records in 00:08:27.983 256+0 records out 00:08:27.983 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.174 s, 6.0 MB/s 00:08:27.983 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:27.983 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:08:28.272 256+0 records in 00:08:28.272 256+0 records out 00:08:28.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.168037 s, 6.2 MB/s 00:08:28.272 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.272 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:08:28.272 256+0 records in 00:08:28.272 256+0 records out 00:08:28.272 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.148749 s, 7.0 MB/s 00:08:28.272 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.272 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:08:28.531 256+0 records in 00:08:28.531 256+0 records out 00:08:28.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.17899 s, 5.9 MB/s 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:08:28.531 256+0 records in 00:08:28.531 256+0 records out 00:08:28.531 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.14811 s, 7.1 MB/s 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
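For reference, the write pass traced above reduces to a small dd pattern that the verify records below then check. A minimal sketch, assuming the seven NBD devices are already exported by the spdk-nbd app (device list and I/O sizes are taken from the trace; the temp-file path is hypothetical):

  # Build 1 MiB of random reference data once.
  tmp_file=/tmp/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14)
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256

  # Copy it to every exported device with O_DIRECT so the bytes reach the
  # bdev rather than sitting in the page cache when the compare runs.
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done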
00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.531 17:58:44 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:08:28.531 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.531 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:08:28.789 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.789 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:08:28.789 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:28.789 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:08:28.789 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:08:28.789 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:08:28.789 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:28.789 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:08:28.789 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:28.790 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:28.790 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:28.790 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 
0 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.047 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.306 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:29.564 17:58:45 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:08:29.822 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:08:30.081 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:08:30.081 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:08:30.081 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.081 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.081 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:08:30.081 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.081 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.081 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.081 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:08:30.339 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:08:30.339 17:58:46 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:08:30.339 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:08:30.339 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.339 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.339 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:08:30.339 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.339 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.339 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.339 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:30.597 17:58:46 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:30.855 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:31.113 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:31.113 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:31.113 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:31.371 
17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:08:31.371 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:31.628 malloc_lvol_verify 00:08:31.628 17:58:47 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:31.885 28f313b6-63b5-442c-8bc3-5294154ba74c 00:08:31.886 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:32.143 3530313a-d973-44fd-a415-f93678ff18db 00:08:32.143 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:32.401 /dev/nbd0 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:08:32.401 mke2fs 1.47.0 (5-Feb-2023) 00:08:32.401 Discarding device blocks: 0/4096 done 00:08:32.401 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:32.401 00:08:32.401 Allocating group tables: 0/1 done 00:08:32.401 Writing inode tables: 0/1 done 00:08:32.401 Creating journal (1024 blocks): done 00:08:32.401 Writing superblocks and filesystem accounting information: 0/1 done 00:08:32.401 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:08:32.401 17:58:48 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:32.401 17:58:48 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 62730 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 62730 ']' 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 62730 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 62730 00:08:32.968 killing process with pid 62730 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 62730' 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@971 -- # kill 62730 00:08:32.968 17:58:49 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@976 -- # wait 62730 00:08:33.901 17:58:50 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:08:33.901 00:08:33.901 real 0m15.512s 00:08:33.901 user 0m22.744s 00:08:33.901 sys 0m4.756s 00:08:33.901 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.901 17:58:50 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:08:33.901 ************************************ 00:08:33.901 END TEST bdev_nbd 00:08:33.901 ************************************ 00:08:33.901 skipping fio tests on NVMe due to multi-ns failures. 00:08:33.901 17:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:08:33.901 17:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = nvme ']' 00:08:33.902 17:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@763 -- # '[' gpt = gpt ']' 00:08:33.902 17:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@765 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
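The verify pass and teardown that close out bdev_nbd above follow the same shape: cmp each device against the reference file, then stop the disk over RPC and poll /proc/partitions until the kernel drops it. A sketch under the same assumptions as the write-pass snippet (the 20-attempt limit mirrors waitfornbd_exit in the trace; the sleep interval is an assumption):

  # Compare only the first 1 MiB of each device against the reference file.
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" || echo "data mismatch on $dev"
  done
  rm "$tmp_file"

  # Detach one device and wait for its name to leave /proc/partitions.
  stop_and_wait() {
    local name i
    name=$(basename "$1")
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk "$1"
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$name" /proc/partitions || return 0
      sleep 0.1   # assumed polling interval
    done
    return 1
  }
  for dev in "${nbd_list[@]}"; do stop_and_wait "$dev"; done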
00:08:33.902 17:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:33.902 17:58:50 blockdev_nvme_gpt -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:33.902 17:58:50 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:33.902 17:58:50 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:33.902 17:58:50 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:33.902 ************************************ 00:08:33.902 START TEST bdev_verify 00:08:33.902 ************************************ 00:08:33.902 17:58:50 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:34.160 [2024-10-28 17:58:50.438971] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:08:34.160 [2024-10-28 17:58:50.439139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63190 ] 00:08:34.160 [2024-10-28 17:58:50.616543] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.418 [2024-10-28 17:58:50.748119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.418 [2024-10-28 17:58:50.748119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.350 Running I/O for 5 seconds... 
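Stripped of the run_test harness, the verify job that produces the progress lines and summary table below is a single bdevperf invocation; every flag here is copied from the trace above rather than inferred:

  # 5-second verify workload: queue depth 128, 4 KiB I/O, core mask 0x3 (cores 0-1).
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3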
00:08:37.658 19456.00 IOPS, 76.00 MiB/s [2024-10-28T17:58:55.071Z] 18816.00 IOPS, 73.50 MiB/s [2024-10-28T17:58:56.004Z] 19328.00 IOPS, 75.50 MiB/s [2024-10-28T17:58:56.937Z] 18864.00 IOPS, 73.69 MiB/s [2024-10-28T17:58:56.937Z] 18790.40 IOPS, 73.40 MiB/s 00:08:40.459 Latency(us) 00:08:40.459 [2024-10-28T17:58:56.937Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:40.459 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x0 length 0xbd0bd 00:08:40.459 Nvme0n1 : 5.05 1316.74 5.14 0.00 0.00 96752.98 20137.43 91035.46 00:08:40.459 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:08:40.459 Nvme0n1 : 5.10 1330.26 5.20 0.00 0.00 96005.73 19779.96 85315.96 00:08:40.459 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x0 length 0x4ff80 00:08:40.459 Nvme1n1p1 : 5.06 1316.20 5.14 0.00 0.00 96602.41 23592.96 87699.08 00:08:40.459 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x4ff80 length 0x4ff80 00:08:40.459 Nvme1n1p1 : 5.10 1329.67 5.19 0.00 0.00 95887.73 18230.92 82932.83 00:08:40.459 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x0 length 0x4ff7f 00:08:40.459 Nvme1n1p2 : 5.09 1320.84 5.16 0.00 0.00 96149.37 11915.64 84839.33 00:08:40.459 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x4ff7f length 0x4ff7f 00:08:40.459 Nvme1n1p2 : 5.10 1329.13 5.19 0.00 0.00 95786.73 18707.55 82932.83 00:08:40.459 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x0 length 0x80000 00:08:40.459 Nvme2n1 : 5.09 1320.31 5.16 0.00 0.00 96016.28 12094.37 85315.96 00:08:40.459 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x80000 length 0x80000 00:08:40.459 Nvme2n1 : 5.11 1327.98 5.19 0.00 0.00 95678.87 21924.77 79596.45 00:08:40.459 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x0 length 0x80000 00:08:40.459 Nvme2n2 : 5.09 1319.69 5.16 0.00 0.00 95893.54 12571.00 83409.45 00:08:40.459 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x80000 length 0x80000 00:08:40.459 Nvme2n2 : 5.11 1327.51 5.19 0.00 0.00 95550.42 22282.24 82456.20 00:08:40.459 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x0 length 0x80000 00:08:40.459 Nvme2n3 : 5.10 1328.92 5.19 0.00 0.00 95349.18 9651.67 87222.46 00:08:40.459 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x80000 length 0x80000 00:08:40.459 Nvme2n3 : 5.11 1327.05 5.18 0.00 0.00 95418.66 22401.40 82932.83 00:08:40.459 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x0 length 0x20000 00:08:40.459 Nvme3n1 : 5.11 1328.36 5.19 0.00 0.00 95233.21 10068.71 90558.84 00:08:40.459 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:08:40.459 Verification LBA range: start 0x20000 length 0x20000 00:08:40.459 
Nvme3n1 : 5.11 1326.59 5.18 0.00 0.00 95287.10 15847.80 85315.96 00:08:40.459 [2024-10-28T17:58:56.937Z] =================================================================================================================== 00:08:40.459 [2024-10-28T17:58:56.937Z] Total : 18549.24 72.46 0.00 0.00 95826.76 9651.67 91035.46 00:08:41.831 00:08:41.831 real 0m7.669s 00:08:41.831 user 0m14.164s 00:08:41.831 sys 0m0.264s 00:08:41.831 17:58:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:41.831 ************************************ 00:08:41.831 END TEST bdev_verify 00:08:41.831 ************************************ 00:08:41.831 17:58:57 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:08:41.831 17:58:58 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:41.831 17:58:58 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:08:41.831 17:58:58 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:41.831 17:58:58 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:41.831 ************************************ 00:08:41.831 START TEST bdev_verify_big_io 00:08:41.831 ************************************ 00:08:41.831 17:58:58 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:08:41.831 [2024-10-28 17:58:58.132954] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:08:41.831 [2024-10-28 17:58:58.133101] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63294 ] 00:08:42.114 [2024-10-28 17:58:58.312966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:42.114 [2024-10-28 17:58:58.446641] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.114 [2024-10-28 17:58:58.446647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.048 Running I/O for 5 seconds... 
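The bdevperf summaries above and below are fixed-format text, so a single figure can be pulled from a saved run without parsing the whole table. A hypothetical helper, assuming the elapsed-time prefixes have been stripped and the output lives in run.log:

  # Print aggregate IOPS and MiB/s from the Total row of a bdevperf summary.
  awk '$1 == "Total" { print "IOPS=" $3, "MiB/s=" $4 }' run.log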
00:08:48.865 1229.00 IOPS, 76.81 MiB/s [2024-10-28T17:59:05.601Z] 2777.00 IOPS, 173.56 MiB/s [2024-10-28T17:59:05.601Z] 3231.67 IOPS, 201.98 MiB/s 00:08:49.123 Latency(us) 00:08:49.123 [2024-10-28T17:59:05.601Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.123 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:49.123 Verification LBA range: start 0x0 length 0xbd0b 00:08:49.123 Nvme0n1 : 5.84 109.56 6.85 0.00 0.00 1101418.03 16086.11 1143901.09 00:08:49.123 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:49.123 Verification LBA range: start 0xbd0b length 0xbd0b 00:08:49.123 Nvme0n1 : 5.69 119.71 7.48 0.00 0.00 1034162.45 66250.94 1121023.07 00:08:49.123 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:49.123 Verification LBA range: start 0x0 length 0x4ff8 00:08:49.123 Nvme1n1p1 : 5.84 106.25 6.64 0.00 0.00 1123992.58 61961.31 1654843.58 00:08:49.123 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x4ff8 length 0x4ff8 00:08:49.124 Nvme1n1p1 : 5.80 85.48 5.34 0.00 0.00 1393140.65 163005.91 1814989.73 00:08:49.124 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x0 length 0x4ff7 00:08:49.124 Nvme1n1p2 : 5.85 106.02 6.63 0.00 0.00 1089147.97 72923.69 1670095.59 00:08:49.124 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x4ff7 length 0x4ff7 00:08:49.124 Nvme1n1p2 : 6.03 79.66 4.98 0.00 0.00 1456008.35 144894.14 1814989.73 00:08:49.124 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x0 length 0x8000 00:08:49.124 Nvme2n1 : 5.93 110.17 6.89 0.00 0.00 1020545.97 75306.82 1708225.63 00:08:49.124 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x8000 length 0x8000 00:08:49.124 Nvme2n1 : 5.87 130.16 8.14 0.00 0.00 877841.15 63391.19 968502.92 00:08:49.124 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x0 length 0x8000 00:08:49.124 Nvme2n2 : 6.05 119.09 7.44 0.00 0.00 925656.94 25141.99 1738729.66 00:08:49.124 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x8000 length 0x8000 00:08:49.124 Nvme2n2 : 5.92 134.29 8.39 0.00 0.00 830207.35 46947.61 991380.95 00:08:49.124 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x0 length 0x8000 00:08:49.124 Nvme2n3 : 6.07 122.63 7.66 0.00 0.00 869725.02 17515.99 1761607.68 00:08:49.124 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x8000 length 0x8000 00:08:49.124 Nvme2n3 : 5.97 139.43 8.71 0.00 0.00 779177.21 40989.79 1014258.97 00:08:49.124 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x0 length 0x2000 00:08:49.124 Nvme3n1 : 6.13 143.87 8.99 0.00 0.00 728234.24 1414.98 1784485.70 00:08:49.124 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:08:49.124 Verification LBA range: start 0x2000 length 0x2000 00:08:49.124 Nvme3n1 : 6.03 153.64 9.60 0.00 0.00 692305.70 2517.18 1052389.00 00:08:49.124 
[2024-10-28T17:59:05.602Z] =================================================================================================================== 00:08:49.124 [2024-10-28T17:59:05.602Z] Total : 1659.96 103.75 0.00 0.00 955385.18 1414.98 1814989.73 00:08:51.039 00:08:51.039 real 0m9.094s 00:08:51.039 user 0m16.993s 00:08:51.039 sys 0m0.304s 00:08:51.039 17:59:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:51.039 ************************************ 00:08:51.039 END TEST bdev_verify_big_io 00:08:51.039 ************************************ 00:08:51.039 17:59:07 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:08:51.039 17:59:07 blockdev_nvme_gpt -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:51.039 17:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:51.039 17:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:51.039 17:59:07 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:51.039 ************************************ 00:08:51.039 START TEST bdev_write_zeroes 00:08:51.039 ************************************ 00:08:51.039 17:59:07 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:51.039 [2024-10-28 17:59:07.289052] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:08:51.039 [2024-10-28 17:59:07.289224] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63409 ] 00:08:51.039 [2024-10-28 17:59:07.479335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.297 [2024-10-28 17:59:07.603697] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.863 Running I/O for 1 seconds... 
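In the write_zeroes run that follows, the IOPS and MiB/s columns are tied together by the 4 KiB I/O size, which makes the progress lines easy to sanity-check:

  # First progress line below reports 44288 IOPS at 4096 B per I/O:
  # 44288 * 4096 / 1048576 = 173 MiB/s, matching the printed figure.
  echo $(( 44288 * 4096 / 1048576 ))   # 173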
00:08:53.238 44288.00 IOPS, 173.00 MiB/s 00:08:53.238 Latency(us) 00:08:53.238 [2024-10-28T17:59:09.716Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.238 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.238 Nvme0n1 : 1.04 6293.84 24.59 0.00 0.00 20277.67 14894.55 38606.66 00:08:53.238 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.238 Nvme1n1p1 : 1.04 6283.48 24.54 0.00 0.00 20277.48 14596.65 38368.35 00:08:53.238 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.238 Nvme1n1p2 : 1.04 6273.39 24.51 0.00 0.00 20224.89 14596.65 36700.16 00:08:53.238 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.239 Nvme2n1 : 1.04 6264.04 24.47 0.00 0.00 20181.63 10307.03 35270.28 00:08:53.239 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.239 Nvme2n2 : 1.04 6254.83 24.43 0.00 0.00 20154.74 9711.24 34793.66 00:08:53.239 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.239 Nvme2n3 : 1.05 6245.59 24.40 0.00 0.00 20130.32 8698.41 36938.47 00:08:53.239 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:08:53.239 Nvme3n1 : 1.05 6175.23 24.12 0.00 0.00 20311.98 14298.76 39559.91 00:08:53.239 [2024-10-28T17:59:09.717Z] =================================================================================================================== 00:08:53.239 [2024-10-28T17:59:09.717Z] Total : 43790.41 171.06 0.00 0.00 20222.55 8698.41 39559.91 00:08:54.182 ************************************ 00:08:54.182 END TEST bdev_write_zeroes 00:08:54.182 ************************************ 00:08:54.182 00:08:54.182 real 0m3.249s 00:08:54.182 user 0m2.884s 00:08:54.182 sys 0m0.237s 00:08:54.182 17:59:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.182 17:59:10 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 17:59:10 blockdev_nvme_gpt -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:54.182 17:59:10 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:54.182 17:59:10 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.182 17:59:10 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:54.182 ************************************ 00:08:54.183 START TEST bdev_json_nonenclosed 00:08:54.183 ************************************ 00:08:54.183 17:59:10 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:54.183 [2024-10-28 17:59:10.596373] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
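bdev_json_nonenclosed, starting here, is a negative test: bdevperf is handed a config whose top level is not wrapped in an object, and the ERROR/WARNING lines below are the expected outcome. The log never shows nonenclosed.json itself, so the following shape is only an assumption:

  # Hypothetical input that would trip "not enclosed in {}": the subsystems
  # key appears bare, without the enclosing top-level object.
  printf '%s\n' '"subsystems": [' ']' > /tmp/nonenclosed.json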
00:08:54.183 [2024-10-28 17:59:10.596558] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63462 ] 00:08:54.441 [2024-10-28 17:59:10.784526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.441 [2024-10-28 17:59:10.910337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.441 [2024-10-28 17:59:10.910459] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:08:54.441 [2024-10-28 17:59:10.910492] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:54.441 [2024-10-28 17:59:10.910510] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.006 00:08:55.006 real 0m0.701s 00:08:55.006 user 0m0.468s 00:08:55.006 sys 0m0.126s 00:08:55.006 ************************************ 00:08:55.006 END TEST bdev_json_nonenclosed 00:08:55.006 ************************************ 00:08:55.006 17:59:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:55.006 17:59:11 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:08:55.006 17:59:11 blockdev_nvme_gpt -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.006 17:59:11 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:08:55.006 17:59:11 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:55.006 17:59:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.006 ************************************ 00:08:55.006 START TEST bdev_json_nonarray 00:08:55.006 ************************************ 00:08:55.006 17:59:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:08:55.006 [2024-10-28 17:59:11.337019] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:08:55.006 [2024-10-28 17:59:11.337186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63488 ] 00:08:55.264 [2024-10-28 17:59:11.523907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.264 [2024-10-28 17:59:11.649052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:55.264 [2024-10-28 17:59:11.649405] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
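The companion bdev_json_nonarray case, whose expected error closes the records above, flips the failure mode: the top-level object is present but "subsystems" does not map to an array. The file contents are likewise absent from the log, so this shape is also an assumption:

  # Hypothetical input that would trip "'subsystems' should be an array".
  printf '%s\n' '{ "subsystems": {} }' > /tmp/nonarray.json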
00:08:55.264 [2024-10-28 17:59:11.649450] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:08:55.264 [2024-10-28 17:59:11.649468] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:55.522 ************************************ 00:08:55.522 END TEST bdev_json_nonarray 00:08:55.522 ************************************ 00:08:55.522 00:08:55.522 real 0m0.697s 00:08:55.522 user 0m0.474s 00:08:55.522 sys 0m0.116s 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:08:55.522 17:59:11 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # [[ gpt == bdev ]] 00:08:55.522 17:59:11 blockdev_nvme_gpt -- bdev/blockdev.sh@793 -- # [[ gpt == gpt ]] 00:08:55.522 17:59:11 blockdev_nvme_gpt -- bdev/blockdev.sh@794 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:08:55.522 17:59:11 blockdev_nvme_gpt -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:55.522 17:59:11 blockdev_nvme_gpt -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:55.522 17:59:11 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:08:55.522 ************************************ 00:08:55.522 START TEST bdev_gpt_uuid 00:08:55.522 ************************************ 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1127 -- # bdev_gpt_uuid 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@613 -- # local bdev 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@615 -- # start_spdk_tgt 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63518 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63518 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@833 -- # '[' -z 63518 ']' 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.522 17:59:11 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:55.780 [2024-10-28 17:59:12.117530] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
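The gpt_uuid test spun up here checks that a GPT partition bdev can be looked up by UUID and that its alias and unique_partition_guid fields round-trip. The jq expressions in the trace below reduce to a few lines; a sketch using the first-partition UUID from the trace:

  # Fetch the partition bdev by UUID and confirm both identifiers match.
  uuid=6f89f330-603b-4116-ac73-2ca8eae53030
  bdev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b "$uuid")
  [[ $(jq -r '.[0].aliases[0]' <<< "$bdev") == "$uuid" ]] &&
  [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev") == "$uuid" ]] &&
  echo "GPT UUID round-trip OK for $uuid"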
00:08:55.780 [2024-10-28 17:59:12.117710] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63518 ] 00:08:56.038 [2024-10-28 17:59:12.300741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.038 [2024-10-28 17:59:12.426071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.970 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.970 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@866 -- # return 0 00:08:56.970 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@617 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:08:56.970 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:56.970 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:57.228 Some configs were skipped because the RPC state that can call them passed over. 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@618 -- # rpc_cmd bdev_wait_for_examine 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@620 -- # bdev='[ 00:08:57.228 { 00:08:57.228 "name": "Nvme1n1p1", 00:08:57.228 "aliases": [ 00:08:57.228 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:08:57.228 ], 00:08:57.228 "product_name": "GPT Disk", 00:08:57.228 "block_size": 4096, 00:08:57.228 "num_blocks": 655104, 00:08:57.228 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:57.228 "assigned_rate_limits": { 00:08:57.228 "rw_ios_per_sec": 0, 00:08:57.228 "rw_mbytes_per_sec": 0, 00:08:57.228 "r_mbytes_per_sec": 0, 00:08:57.228 "w_mbytes_per_sec": 0 00:08:57.228 }, 00:08:57.228 "claimed": false, 00:08:57.228 "zoned": false, 00:08:57.228 "supported_io_types": { 00:08:57.228 "read": true, 00:08:57.228 "write": true, 00:08:57.228 "unmap": true, 00:08:57.228 "flush": true, 00:08:57.228 "reset": true, 00:08:57.228 "nvme_admin": false, 00:08:57.228 "nvme_io": false, 00:08:57.228 "nvme_io_md": false, 00:08:57.228 "write_zeroes": true, 00:08:57.228 "zcopy": false, 00:08:57.228 "get_zone_info": false, 00:08:57.228 "zone_management": false, 00:08:57.228 "zone_append": false, 00:08:57.228 "compare": true, 00:08:57.228 "compare_and_write": false, 00:08:57.228 "abort": true, 00:08:57.228 "seek_hole": false, 00:08:57.228 "seek_data": false, 00:08:57.228 "copy": true, 00:08:57.228 "nvme_iov_md": false 00:08:57.228 }, 00:08:57.228 "driver_specific": { 
00:08:57.228 "gpt": { 00:08:57.228 "base_bdev": "Nvme1n1", 00:08:57.228 "offset_blocks": 256, 00:08:57.228 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:08:57.228 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:08:57.228 "partition_name": "SPDK_TEST_first" 00:08:57.228 } 00:08:57.228 } 00:08:57.228 } 00:08:57.228 ]' 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # jq -r length 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@621 -- # [[ 1 == \1 ]] 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # jq -r '.[0].aliases[0]' 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@622 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:57.228 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@623 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@625 -- # bdev='[ 00:08:57.487 { 00:08:57.487 "name": "Nvme1n1p2", 00:08:57.487 "aliases": [ 00:08:57.487 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:08:57.487 ], 00:08:57.487 "product_name": "GPT Disk", 00:08:57.487 "block_size": 4096, 00:08:57.487 "num_blocks": 655103, 00:08:57.487 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:57.487 "assigned_rate_limits": { 00:08:57.487 "rw_ios_per_sec": 0, 00:08:57.487 "rw_mbytes_per_sec": 0, 00:08:57.487 "r_mbytes_per_sec": 0, 00:08:57.487 "w_mbytes_per_sec": 0 00:08:57.487 }, 00:08:57.487 "claimed": false, 00:08:57.487 "zoned": false, 00:08:57.487 "supported_io_types": { 00:08:57.487 "read": true, 00:08:57.487 "write": true, 00:08:57.487 "unmap": true, 00:08:57.487 "flush": true, 00:08:57.487 "reset": true, 00:08:57.487 "nvme_admin": false, 00:08:57.487 "nvme_io": false, 00:08:57.487 "nvme_io_md": false, 00:08:57.487 "write_zeroes": true, 00:08:57.487 "zcopy": false, 00:08:57.487 "get_zone_info": false, 00:08:57.487 "zone_management": false, 00:08:57.487 "zone_append": false, 00:08:57.487 "compare": true, 00:08:57.487 "compare_and_write": false, 00:08:57.487 "abort": true, 00:08:57.487 "seek_hole": false, 00:08:57.487 "seek_data": false, 00:08:57.487 "copy": true, 00:08:57.487 "nvme_iov_md": false 00:08:57.487 }, 00:08:57.487 "driver_specific": { 00:08:57.487 "gpt": { 00:08:57.487 "base_bdev": "Nvme1n1", 00:08:57.487 "offset_blocks": 655360, 00:08:57.487 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:08:57.487 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:08:57.487 "partition_name": "SPDK_TEST_second" 00:08:57.487 } 00:08:57.487 } 00:08:57.487 } 00:08:57.487 ]' 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@626 -- # jq -r length 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@626 -- # [[ 1 == \1 ]] 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # jq -r '.[0].aliases[0]' 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@627 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@628 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@630 -- # killprocess 63518 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@952 -- # '[' -z 63518 ']' 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@956 -- # kill -0 63518 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # uname 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 63518 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.487 killing process with pid 63518 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 63518' 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@971 -- # kill 63518 00:08:57.487 17:59:13 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@976 -- # wait 63518 00:09:00.018 ************************************ 00:09:00.018 END TEST bdev_gpt_uuid 00:09:00.018 ************************************ 00:09:00.018 00:09:00.018 real 0m4.118s 00:09:00.018 user 0m4.548s 00:09:00.018 sys 0m0.461s 00:09:00.018 17:59:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:00.018 17:59:16 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:09:00.018 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # [[ gpt == crypto_sw ]] 00:09:00.018 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:09:00.018 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@810 -- # cleanup 00:09:00.018 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:09:00.018 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:09:00.018 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:09:00.018 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:09:00.018 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:09:00.018 17:59:16 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:00.276 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.276 Waiting for block devices as requested 00:09:00.276 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.534 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:09:00.534 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.534 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:05.853 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:05.853 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:09:05.853 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:09:06.111 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:09:06.111 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:09:06.111 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:09:06.111 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:09:06.111 17:59:22 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:09:06.111 00:09:06.111 real 1m5.942s 00:09:06.111 user 1m25.928s 00:09:06.111 sys 0m9.883s 00:09:06.111 17:59:22 blockdev_nvme_gpt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:06.111 17:59:22 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:09:06.111 ************************************ 00:09:06.111 END TEST blockdev_nvme_gpt 00:09:06.111 ************************************ 00:09:06.111 17:59:22 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:06.111 17:59:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:06.111 17:59:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:06.111 17:59:22 -- common/autotest_common.sh@10 -- # set +x 00:09:06.111 ************************************ 00:09:06.111 START TEST nvme 00:09:06.111 ************************************ 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:09:06.111 * Looking for test storage... 00:09:06.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1691 -- # lcov --version 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:06.111 17:59:22 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.111 17:59:22 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.111 17:59:22 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.111 17:59:22 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.111 17:59:22 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.111 17:59:22 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.111 17:59:22 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.111 17:59:22 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.111 17:59:22 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.111 17:59:22 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.111 17:59:22 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.111 17:59:22 nvme -- scripts/common.sh@344 -- # case "$op" in 00:09:06.111 17:59:22 nvme -- scripts/common.sh@345 -- # : 1 00:09:06.111 17:59:22 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.111 17:59:22 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.111 17:59:22 nvme -- scripts/common.sh@365 -- # decimal 1 00:09:06.111 17:59:22 nvme -- scripts/common.sh@353 -- # local d=1 00:09:06.111 17:59:22 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.111 17:59:22 nvme -- scripts/common.sh@355 -- # echo 1 00:09:06.111 17:59:22 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.111 17:59:22 nvme -- scripts/common.sh@366 -- # decimal 2 00:09:06.111 17:59:22 nvme -- scripts/common.sh@353 -- # local d=2 00:09:06.111 17:59:22 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.111 17:59:22 nvme -- scripts/common.sh@355 -- # echo 2 00:09:06.111 17:59:22 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.111 17:59:22 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.111 17:59:22 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.111 17:59:22 nvme -- scripts/common.sh@368 -- # return 0 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:06.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.111 --rc genhtml_branch_coverage=1 00:09:06.111 --rc genhtml_function_coverage=1 00:09:06.111 --rc genhtml_legend=1 00:09:06.111 --rc geninfo_all_blocks=1 00:09:06.111 --rc geninfo_unexecuted_blocks=1 00:09:06.111 00:09:06.111 ' 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:06.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.111 --rc genhtml_branch_coverage=1 00:09:06.111 --rc genhtml_function_coverage=1 00:09:06.111 --rc genhtml_legend=1 00:09:06.111 --rc geninfo_all_blocks=1 00:09:06.111 --rc geninfo_unexecuted_blocks=1 00:09:06.111 00:09:06.111 ' 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:06.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.111 --rc genhtml_branch_coverage=1 00:09:06.111 --rc genhtml_function_coverage=1 00:09:06.111 --rc genhtml_legend=1 00:09:06.111 --rc geninfo_all_blocks=1 00:09:06.111 --rc geninfo_unexecuted_blocks=1 00:09:06.111 00:09:06.111 ' 00:09:06.111 17:59:22 nvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:06.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.111 --rc genhtml_branch_coverage=1 00:09:06.111 --rc genhtml_function_coverage=1 00:09:06.111 --rc genhtml_legend=1 00:09:06.111 --rc geninfo_all_blocks=1 00:09:06.111 --rc geninfo_unexecuted_blocks=1 00:09:06.111 00:09:06.111 ' 00:09:06.111 17:59:22 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:06.677 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:07.243 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.243 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.243 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.243 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:07.502 17:59:23 nvme -- nvme/nvme.sh@79 -- # uname 00:09:07.502 17:59:23 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:09:07.502 17:59:23 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:09:07.502 17:59:23 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:09:07.502 17:59:23 nvme -- common/autotest_common.sh@1084 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:09:07.502 17:59:23 nvme -- 
common/autotest_common.sh@1070 -- # _randomize_va_space=2 00:09:07.502 17:59:23 nvme -- common/autotest_common.sh@1071 -- # echo 0 00:09:07.502 Waiting for stub to ready for secondary processes... 00:09:07.502 17:59:23 nvme -- common/autotest_common.sh@1073 -- # stubpid=64173 00:09:07.502 17:59:23 nvme -- common/autotest_common.sh@1074 -- # echo Waiting for stub to ready for secondary processes... 00:09:07.502 17:59:23 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:07.502 17:59:23 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64173 ]] 00:09:07.502 17:59:23 nvme -- common/autotest_common.sh@1072 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:09:07.502 17:59:23 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:09:07.502 [2024-10-28 17:59:23.819708] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:09:07.502 [2024-10-28 17:59:23.819942] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:09:08.437 [2024-10-28 17:59:24.620654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:08.437 [2024-10-28 17:59:24.745513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:08.437 [2024-10-28 17:59:24.745643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:08.437 [2024-10-28 17:59:24.745643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:08.437 17:59:24 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:08.437 17:59:24 nvme -- common/autotest_common.sh@1077 -- # [[ -e /proc/64173 ]] 00:09:08.437 17:59:24 nvme -- common/autotest_common.sh@1078 -- # sleep 1s 00:09:08.437 [2024-10-28 17:59:24.767560] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:09:08.437 [2024-10-28 17:59:24.767619] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:08.437 [2024-10-28 17:59:24.779407] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:09:08.437 [2024-10-28 17:59:24.779548] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:09:08.437 [2024-10-28 17:59:24.783900] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:08.437 [2024-10-28 17:59:24.784136] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:09:08.437 [2024-10-28 17:59:24.784300] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:09:08.437 [2024-10-28 17:59:24.786641] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:08.437 [2024-10-28 17:59:24.786863] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:09:08.437 [2024-10-28 17:59:24.786955] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:09:08.437 [2024-10-28 17:59:24.789611] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:09:08.437 [2024-10-28 17:59:24.789807] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:09:08.437 [2024-10-28 17:59:24.790018] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:09:08.437 [2024-10-28 17:59:24.790083] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:09:08.437 [2024-10-28 17:59:24.790141] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:09:09.369 done. 00:09:09.369 17:59:25 nvme -- common/autotest_common.sh@1075 -- # '[' -e /var/run/spdk_stub0 ']' 00:09:09.369 17:59:25 nvme -- common/autotest_common.sh@1080 -- # echo done. 00:09:09.369 17:59:25 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:09.369 17:59:25 nvme -- common/autotest_common.sh@1103 -- # '[' 10 -le 1 ']' 00:09:09.369 17:59:25 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.369 17:59:25 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.369 ************************************ 00:09:09.369 START TEST nvme_reset 00:09:09.369 ************************************ 00:09:09.369 17:59:25 nvme.nvme_reset -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:09:09.934 Initializing NVMe Controllers 00:09:09.934 Skipping QEMU NVMe SSD at 0000:00:10.0 00:09:09.934 Skipping QEMU NVMe SSD at 0000:00:11.0 00:09:09.934 Skipping QEMU NVMe SSD at 0000:00:13.0 00:09:09.934 Skipping QEMU NVMe SSD at 0000:00:12.0 00:09:09.934 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:09:09.934 ************************************ 00:09:09.934 END TEST nvme_reset 00:09:09.934 ************************************ 00:09:09.934 00:09:09.934 real 0m0.335s 00:09:09.934 user 0m0.127s 00:09:09.934 sys 0m0.156s 00:09:09.934 17:59:26 nvme.nvme_reset -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.934 17:59:26 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:09:09.934 17:59:26 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:09:09.934 17:59:26 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:09.934 17:59:26 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.934 17:59:26 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:09.934 ************************************ 00:09:09.934 START TEST nvme_identify 00:09:09.934 ************************************ 00:09:09.934 17:59:26 nvme.nvme_identify -- common/autotest_common.sh@1127 -- # nvme_identify 00:09:09.934 17:59:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:09:09.934 17:59:26 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:09:09.934 17:59:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:09:09.934 17:59:26 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:09:09.934 17:59:26 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:09.934 17:59:26 nvme.nvme_identify -- common/autotest_common.sh@1496 -- # local bdfs 00:09:09.934 17:59:26 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:09.934 17:59:26 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:09.934 17:59:26 nvme.nvme_identify -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:09.934 17:59:26 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:09.934 17:59:26 nvme.nvme_identify -- 
common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:09.934 17:59:26 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:09:10.221 [2024-10-28 17:59:26.520170] nvme_ctrlr.c:3642:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64207 terminated unexpected 00:09:10.221 ===================================================== 00:09:10.221 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:10.221 ===================================================== 00:09:10.221 Controller Capabilities/Features 00:09:10.221 ================================ 00:09:10.221 Vendor ID: 1b36 00:09:10.221 Subsystem Vendor ID: 1af4 00:09:10.221 Serial Number: 12340 00:09:10.221 Model Number: QEMU NVMe Ctrl 00:09:10.221 Firmware Version: 8.0.0 00:09:10.221 Recommended Arb Burst: 6 00:09:10.221 IEEE OUI Identifier: 00 54 52 00:09:10.221 Multi-path I/O 00:09:10.221 May have multiple subsystem ports: No 00:09:10.221 May have multiple controllers: No 00:09:10.221 Associated with SR-IOV VF: No 00:09:10.221 Max Data Transfer Size: 524288 00:09:10.221 Max Number of Namespaces: 256 00:09:10.221 Max Number of I/O Queues: 64 00:09:10.221 NVMe Specification Version (VS): 1.4 00:09:10.221 NVMe Specification Version (Identify): 1.4 00:09:10.221 Maximum Queue Entries: 2048 00:09:10.221 Contiguous Queues Required: Yes 00:09:10.221 Arbitration Mechanisms Supported 00:09:10.221 Weighted Round Robin: Not Supported 00:09:10.221 Vendor Specific: Not Supported 00:09:10.221 Reset Timeout: 7500 ms 00:09:10.221 Doorbell Stride: 4 bytes 00:09:10.221 NVM Subsystem Reset: Not Supported 00:09:10.221 Command Sets Supported 00:09:10.221 NVM Command Set: Supported 00:09:10.221 Boot Partition: Not Supported 00:09:10.221 Memory Page Size Minimum: 4096 bytes 00:09:10.221 Memory Page Size Maximum: 65536 bytes 00:09:10.221 Persistent Memory Region: Not Supported 00:09:10.221 Optional Asynchronous Events Supported 00:09:10.221 Namespace Attribute Notices: Supported 00:09:10.221 Firmware Activation Notices: Not Supported 00:09:10.221 ANA Change Notices: Not Supported 00:09:10.221 PLE Aggregate Log Change Notices: Not Supported 00:09:10.221 LBA Status Info Alert Notices: Not Supported 00:09:10.221 EGE Aggregate Log Change Notices: Not Supported 00:09:10.221 Normal NVM Subsystem Shutdown event: Not Supported 00:09:10.221 Zone Descriptor Change Notices: Not Supported 00:09:10.221 Discovery Log Change Notices: Not Supported 00:09:10.221 Controller Attributes 00:09:10.221 128-bit Host Identifier: Not Supported 00:09:10.221 Non-Operational Permissive Mode: Not Supported 00:09:10.221 NVM Sets: Not Supported 00:09:10.221 Read Recovery Levels: Not Supported 00:09:10.221 Endurance Groups: Not Supported 00:09:10.221 Predictable Latency Mode: Not Supported 00:09:10.221 Traffic Based Keep ALive: Not Supported 00:09:10.221 Namespace Granularity: Not Supported 00:09:10.221 SQ Associations: Not Supported 00:09:10.221 UUID List: Not Supported 00:09:10.221 Multi-Domain Subsystem: Not Supported 00:09:10.221 Fixed Capacity Management: Not Supported 00:09:10.222 Variable Capacity Management: Not Supported 00:09:10.222 Delete Endurance Group: Not Supported 00:09:10.222 Delete NVM Set: Not Supported 00:09:10.222 Extended LBA Formats Supported: Supported 00:09:10.222 Flexible Data Placement Supported: Not Supported 00:09:10.222 00:09:10.222 Controller Memory Buffer Support 00:09:10.222 ================================ 00:09:10.222 Supported: No 
00:09:10.222 00:09:10.222 Persistent Memory Region Support 00:09:10.222 ================================ 00:09:10.222 Supported: No 00:09:10.222 00:09:10.222 Admin Command Set Attributes 00:09:10.222 ============================ 00:09:10.222 Security Send/Receive: Not Supported 00:09:10.222 Format NVM: Supported 00:09:10.222 Firmware Activate/Download: Not Supported 00:09:10.222 Namespace Management: Supported 00:09:10.222 Device Self-Test: Not Supported 00:09:10.222 Directives: Supported 00:09:10.222 NVMe-MI: Not Supported 00:09:10.222 Virtualization Management: Not Supported 00:09:10.222 Doorbell Buffer Config: Supported 00:09:10.222 Get LBA Status Capability: Not Supported 00:09:10.222 Command & Feature Lockdown Capability: Not Supported 00:09:10.222 Abort Command Limit: 4 00:09:10.222 Async Event Request Limit: 4 00:09:10.222 Number of Firmware Slots: N/A 00:09:10.222 Firmware Slot 1 Read-Only: N/A 00:09:10.222 Firmware Activation Without Reset: N/A 00:09:10.222 Multiple Update Detection Support: N/A 00:09:10.222 Firmware Update Granularity: No Information Provided 00:09:10.222 Per-Namespace SMART Log: Yes 00:09:10.222 Asymmetric Namespace Access Log Page: Not Supported 00:09:10.222 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:10.222 Command Effects Log Page: Supported 00:09:10.222 Get Log Page Extended Data: Supported 00:09:10.222 Telemetry Log Pages: Not Supported 00:09:10.222 Persistent Event Log Pages: Not Supported 00:09:10.222 Supported Log Pages Log Page: May Support 00:09:10.222 Commands Supported & Effects Log Page: Not Supported 00:09:10.222 Feature Identifiers & Effects Log Page:May Support 00:09:10.222 NVMe-MI Commands & Effects Log Page: May Support 00:09:10.222 Data Area 4 for Telemetry Log: Not Supported 00:09:10.222 Error Log Page Entries Supported: 1 00:09:10.222 Keep Alive: Not Supported 00:09:10.222 00:09:10.222 NVM Command Set Attributes 00:09:10.222 ========================== 00:09:10.222 Submission Queue Entry Size 00:09:10.222 Max: 64 00:09:10.222 Min: 64 00:09:10.222 Completion Queue Entry Size 00:09:10.222 Max: 16 00:09:10.222 Min: 16 00:09:10.222 Number of Namespaces: 256 00:09:10.222 Compare Command: Supported 00:09:10.222 Write Uncorrectable Command: Not Supported 00:09:10.222 Dataset Management Command: Supported 00:09:10.222 Write Zeroes Command: Supported 00:09:10.222 Set Features Save Field: Supported 00:09:10.222 Reservations: Not Supported 00:09:10.222 Timestamp: Supported 00:09:10.222 Copy: Supported 00:09:10.222 Volatile Write Cache: Present 00:09:10.222 Atomic Write Unit (Normal): 1 00:09:10.222 Atomic Write Unit (PFail): 1 00:09:10.222 Atomic Compare & Write Unit: 1 00:09:10.222 Fused Compare & Write: Not Supported 00:09:10.222 Scatter-Gather List 00:09:10.222 SGL Command Set: Supported 00:09:10.222 SGL Keyed: Not Supported 00:09:10.222 SGL Bit Bucket Descriptor: Not Supported 00:09:10.222 SGL Metadata Pointer: Not Supported 00:09:10.222 Oversized SGL: Not Supported 00:09:10.222 SGL Metadata Address: Not Supported 00:09:10.222 SGL Offset: Not Supported 00:09:10.222 Transport SGL Data Block: Not Supported 00:09:10.222 Replay Protected Memory Block: Not Supported 00:09:10.222 00:09:10.222 Firmware Slot Information 00:09:10.222 ========================= 00:09:10.222 Active slot: 1 00:09:10.222 Slot 1 Firmware Revision: 1.0 00:09:10.222 00:09:10.222 00:09:10.222 Commands Supported and Effects 00:09:10.222 ============================== 00:09:10.222 Admin Commands 00:09:10.222 -------------- 00:09:10.222 Delete I/O Submission Queue (00h): Supported 
00:09:10.222 Create I/O Submission Queue (01h): Supported 00:09:10.222 Get Log Page (02h): Supported 00:09:10.222 Delete I/O Completion Queue (04h): Supported 00:09:10.222 Create I/O Completion Queue (05h): Supported 00:09:10.222 Identify (06h): Supported 00:09:10.222 Abort (08h): Supported 00:09:10.222 Set Features (09h): Supported 00:09:10.222 Get Features (0Ah): Supported 00:09:10.222 Asynchronous Event Request (0Ch): Supported 00:09:10.222 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:10.222 Directive Send (19h): Supported 00:09:10.222 Directive Receive (1Ah): Supported 00:09:10.222 Virtualization Management (1Ch): Supported 00:09:10.222 Doorbell Buffer Config (7Ch): Supported 00:09:10.222 Format NVM (80h): Supported LBA-Change 00:09:10.222 I/O Commands 00:09:10.222 ------------ 00:09:10.222 Flush (00h): Supported LBA-Change 00:09:10.222 Write (01h): Supported LBA-Change 00:09:10.222 Read (02h): Supported 00:09:10.222 Compare (05h): Supported 00:09:10.222 Write Zeroes (08h): Supported LBA-Change 00:09:10.222 Dataset Management (09h): Supported LBA-Change 00:09:10.222 Unknown (0Ch): Supported 00:09:10.222 Unknown (12h): Supported 00:09:10.222 Copy (19h): Supported LBA-Change 00:09:10.222 Unknown (1Dh): Supported LBA-Change 00:09:10.222 00:09:10.222 Error Log 00:09:10.222 ========= 00:09:10.222 00:09:10.222 Arbitration 00:09:10.222 =========== 00:09:10.222 Arbitration Burst: no limit 00:09:10.222 00:09:10.222 Power Management 00:09:10.222 ================ 00:09:10.222 Number of Power States: 1 00:09:10.222 Current Power State: Power State #0 00:09:10.222 Power State #0: 00:09:10.222 Max Power: 25.00 W 00:09:10.222 Non-Operational State: Operational 00:09:10.222 Entry Latency: 16 microseconds 00:09:10.222 Exit Latency: 4 microseconds 00:09:10.222 Relative Read Throughput: 0 00:09:10.222 Relative Read Latency: 0 00:09:10.222 Relative Write Throughput: 0 00:09:10.222 Relative Write Latency: 0 00:09:10.222 [2024-10-28 17:59:26.521441] nvme_ctrlr.c:3642:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64207 terminated unexpected 00:09:10.222 Idle Power: Not Reported 00:09:10.222 Active Power: Not Reported 00:09:10.222 Non-Operational Permissive Mode: Not Supported 00:09:10.222 00:09:10.222 Health Information 00:09:10.222 ================== 00:09:10.222 Critical Warnings: 00:09:10.222 Available Spare Space: OK 00:09:10.222 Temperature: OK 00:09:10.222 Device Reliability: OK 00:09:10.222 Read Only: No 00:09:10.222 Volatile Memory Backup: OK 00:09:10.222 Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.222 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:10.222 Available Spare: 0% 00:09:10.222 Available Spare Threshold: 0% 00:09:10.222 Life Percentage Used: 0% 00:09:10.222 Data Units Read: 658 00:09:10.222 Data Units Written: 586 00:09:10.222 Host Read Commands: 33117 00:09:10.222 Host Write Commands: 32903 00:09:10.222 Controller Busy Time: 0 minutes 00:09:10.222 Power Cycles: 0 00:09:10.222 Power On Hours: 0 hours 00:09:10.222 Unsafe Shutdowns: 0 00:09:10.222 Unrecoverable Media Errors: 0 00:09:10.222 Lifetime Error Log Entries: 0 00:09:10.222 Warning Temperature Time: 0 minutes 00:09:10.222 Critical Temperature Time: 0 minutes 00:09:10.222 00:09:10.222 Number of Queues 00:09:10.222 ================ 00:09:10.222 Number of I/O Submission Queues: 64 00:09:10.222 Number of I/O Completion Queues: 64 00:09:10.222 00:09:10.222 ZNS Specific Controller Data 00:09:10.222 ============================ 00:09:10.222 Zone Append Size Limit: 0 00:09:10.222
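The namespace listings in these identify dumps report sizes in LBAs plus an approximate human-readable figure; the arithmetic behind the label is simply bytes = LBA count * data size of the current LBA format. A minimal shell sketch of that check (an illustration added for clarity, not output of the captured run; the variable names are invented), using the 1548666-LBA namespace reported just below, whose current LBA format #07 has a 4096-byte data size:

    lbas=1548666   # "Size (in LBAs)" from the namespace report below
    bsize=4096     # "Data Size" of its current LBA format (#07)
    echo $(( lbas * bsize ))             # 6343335936 bytes
    echo $(( lbas * bsize / 1024**3 ))   # 5 -- truncated GiB, matching the "(5GiB)" label

The same arithmetic is exact for the 1310720-LBA namespace of the 12341 controller further down (5 GiB) and for the 262144-LBA namespace of the 12343 controller (1 GiB).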
00:09:10.222 00:09:10.222 Active Namespaces 00:09:10.222 ================= 00:09:10.222 Namespace ID:1 00:09:10.222 Error Recovery Timeout: Unlimited 00:09:10.222 Command Set Identifier: NVM (00h) 00:09:10.222 Deallocate: Supported 00:09:10.222 Deallocated/Unwritten Error: Supported 00:09:10.222 Deallocated Read Value: All 0x00 00:09:10.222 Deallocate in Write Zeroes: Not Supported 00:09:10.222 Deallocated Guard Field: 0xFFFF 00:09:10.222 Flush: Supported 00:09:10.222 Reservation: Not Supported 00:09:10.222 Metadata Transferred as: Separate Metadata Buffer 00:09:10.222 Namespace Sharing Capabilities: Private 00:09:10.222 Size (in LBAs): 1548666 (5GiB) 00:09:10.222 Capacity (in LBAs): 1548666 (5GiB) 00:09:10.222 Utilization (in LBAs): 1548666 (5GiB) 00:09:10.222 Thin Provisioning: Not Supported 00:09:10.222 Per-NS Atomic Units: No 00:09:10.222 Maximum Single Source Range Length: 128 00:09:10.222 Maximum Copy Length: 128 00:09:10.222 Maximum Source Range Count: 128 00:09:10.222 NGUID/EUI64 Never Reused: No 00:09:10.222 Namespace Write Protected: No 00:09:10.222 Number of LBA Formats: 8 00:09:10.222 Current LBA Format: LBA Format #07 00:09:10.222 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:10.222 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:10.222 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:10.222 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:10.222 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:10.222 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:10.222 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:10.222 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:10.222 00:09:10.222 NVM Specific Namespace Data 00:09:10.223 =========================== 00:09:10.223 Logical Block Storage Tag Mask: 0 00:09:10.223 Protection Information Capabilities: 00:09:10.223 16b Guard Protection Information Storage Tag Support: No 00:09:10.223 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:10.223 Storage Tag Check Read Support: No 00:09:10.223 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.223 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.223 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.223 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.223 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.223 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.223 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.223 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.223 ===================================================== 00:09:10.223 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:10.223 ===================================================== 00:09:10.223 Controller Capabilities/Features 00:09:10.223 ================================ 00:09:10.223 Vendor ID: 1b36 00:09:10.223 Subsystem Vendor ID: 1af4 00:09:10.223 Serial Number: 12341 00:09:10.223 Model Number: QEMU NVMe Ctrl 00:09:10.223 Firmware Version: 8.0.0 00:09:10.223 Recommended Arb Burst: 6 00:09:10.223 IEEE OUI Identifier: 00 54 52 00:09:10.223 Multi-path I/O 00:09:10.223 May have multiple subsystem ports: No 00:09:10.223 May have multiple controllers: No 
00:09:10.223 Associated with SR-IOV VF: No 00:09:10.223 Max Data Transfer Size: 524288 00:09:10.223 Max Number of Namespaces: 256 00:09:10.223 Max Number of I/O Queues: 64 00:09:10.223 NVMe Specification Version (VS): 1.4 00:09:10.223 NVMe Specification Version (Identify): 1.4 00:09:10.223 Maximum Queue Entries: 2048 00:09:10.223 Contiguous Queues Required: Yes 00:09:10.223 Arbitration Mechanisms Supported 00:09:10.223 Weighted Round Robin: Not Supported 00:09:10.223 Vendor Specific: Not Supported 00:09:10.223 Reset Timeout: 7500 ms 00:09:10.223 Doorbell Stride: 4 bytes 00:09:10.223 NVM Subsystem Reset: Not Supported 00:09:10.223 Command Sets Supported 00:09:10.223 NVM Command Set: Supported 00:09:10.223 Boot Partition: Not Supported 00:09:10.223 Memory Page Size Minimum: 4096 bytes 00:09:10.223 Memory Page Size Maximum: 65536 bytes 00:09:10.223 Persistent Memory Region: Not Supported 00:09:10.223 Optional Asynchronous Events Supported 00:09:10.223 Namespace Attribute Notices: Supported 00:09:10.223 Firmware Activation Notices: Not Supported 00:09:10.223 ANA Change Notices: Not Supported 00:09:10.223 PLE Aggregate Log Change Notices: Not Supported 00:09:10.223 LBA Status Info Alert Notices: Not Supported 00:09:10.223 EGE Aggregate Log Change Notices: Not Supported 00:09:10.223 Normal NVM Subsystem Shutdown event: Not Supported 00:09:10.223 Zone Descriptor Change Notices: Not Supported 00:09:10.223 Discovery Log Change Notices: Not Supported 00:09:10.223 Controller Attributes 00:09:10.223 128-bit Host Identifier: Not Supported 00:09:10.223 Non-Operational Permissive Mode: Not Supported 00:09:10.223 NVM Sets: Not Supported 00:09:10.223 Read Recovery Levels: Not Supported 00:09:10.223 Endurance Groups: Not Supported 00:09:10.223 Predictable Latency Mode: Not Supported 00:09:10.223 Traffic Based Keep ALive: Not Supported 00:09:10.223 Namespace Granularity: Not Supported 00:09:10.223 SQ Associations: Not Supported 00:09:10.223 UUID List: Not Supported 00:09:10.223 Multi-Domain Subsystem: Not Supported 00:09:10.223 Fixed Capacity Management: Not Supported 00:09:10.223 Variable Capacity Management: Not Supported 00:09:10.223 Delete Endurance Group: Not Supported 00:09:10.223 Delete NVM Set: Not Supported 00:09:10.223 Extended LBA Formats Supported: Supported 00:09:10.223 Flexible Data Placement Supported: Not Supported 00:09:10.223 00:09:10.223 Controller Memory Buffer Support 00:09:10.223 ================================ 00:09:10.223 Supported: No 00:09:10.223 00:09:10.223 Persistent Memory Region Support 00:09:10.223 ================================ 00:09:10.223 Supported: No 00:09:10.223 00:09:10.223 Admin Command Set Attributes 00:09:10.223 ============================ 00:09:10.223 Security Send/Receive: Not Supported 00:09:10.223 Format NVM: Supported 00:09:10.223 Firmware Activate/Download: Not Supported 00:09:10.223 Namespace Management: Supported 00:09:10.223 Device Self-Test: Not Supported 00:09:10.223 Directives: Supported 00:09:10.223 NVMe-MI: Not Supported 00:09:10.223 Virtualization Management: Not Supported 00:09:10.223 Doorbell Buffer Config: Supported 00:09:10.223 Get LBA Status Capability: Not Supported 00:09:10.223 Command & Feature Lockdown Capability: Not Supported 00:09:10.223 Abort Command Limit: 4 00:09:10.223 Async Event Request Limit: 4 00:09:10.223 Number of Firmware Slots: N/A 00:09:10.223 Firmware Slot 1 Read-Only: N/A 00:09:10.223 Firmware Activation Without Reset: N/A 00:09:10.223 Multiple Update Detection Support: N/A 00:09:10.223 Firmware Update Granularity: No 
Information Provided 00:09:10.223 Per-Namespace SMART Log: Yes 00:09:10.223 Asymmetric Namespace Access Log Page: Not Supported 00:09:10.223 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:10.223 Command Effects Log Page: Supported 00:09:10.223 Get Log Page Extended Data: Supported 00:09:10.223 Telemetry Log Pages: Not Supported 00:09:10.223 Persistent Event Log Pages: Not Supported 00:09:10.223 Supported Log Pages Log Page: May Support 00:09:10.223 Commands Supported & Effects Log Page: Not Supported 00:09:10.223 Feature Identifiers & Effects Log Page:May Support 00:09:10.223 NVMe-MI Commands & Effects Log Page: May Support 00:09:10.223 Data Area 4 for Telemetry Log: Not Supported 00:09:10.223 Error Log Page Entries Supported: 1 00:09:10.223 Keep Alive: Not Supported 00:09:10.223 00:09:10.223 NVM Command Set Attributes 00:09:10.223 ========================== 00:09:10.223 Submission Queue Entry Size 00:09:10.223 Max: 64 00:09:10.223 Min: 64 00:09:10.223 Completion Queue Entry Size 00:09:10.223 Max: 16 00:09:10.223 Min: 16 00:09:10.223 Number of Namespaces: 256 00:09:10.223 Compare Command: Supported 00:09:10.223 Write Uncorrectable Command: Not Supported 00:09:10.223 Dataset Management Command: Supported 00:09:10.223 Write Zeroes Command: Supported 00:09:10.223 Set Features Save Field: Supported 00:09:10.223 Reservations: Not Supported 00:09:10.223 Timestamp: Supported 00:09:10.223 Copy: Supported 00:09:10.223 Volatile Write Cache: Present 00:09:10.223 Atomic Write Unit (Normal): 1 00:09:10.223 Atomic Write Unit (PFail): 1 00:09:10.223 Atomic Compare & Write Unit: 1 00:09:10.223 Fused Compare & Write: Not Supported 00:09:10.223 Scatter-Gather List 00:09:10.223 SGL Command Set: Supported 00:09:10.223 SGL Keyed: Not Supported 00:09:10.223 SGL Bit Bucket Descriptor: Not Supported 00:09:10.223 SGL Metadata Pointer: Not Supported 00:09:10.223 Oversized SGL: Not Supported 00:09:10.223 SGL Metadata Address: Not Supported 00:09:10.223 SGL Offset: Not Supported 00:09:10.223 Transport SGL Data Block: Not Supported 00:09:10.223 Replay Protected Memory Block: Not Supported 00:09:10.223 00:09:10.223 Firmware Slot Information 00:09:10.223 ========================= 00:09:10.223 Active slot: 1 00:09:10.223 Slot 1 Firmware Revision: 1.0 00:09:10.223 00:09:10.223 00:09:10.223 Commands Supported and Effects 00:09:10.223 ============================== 00:09:10.223 Admin Commands 00:09:10.223 -------------- 00:09:10.223 Delete I/O Submission Queue (00h): Supported 00:09:10.223 Create I/O Submission Queue (01h): Supported 00:09:10.223 Get Log Page (02h): Supported 00:09:10.223 Delete I/O Completion Queue (04h): Supported 00:09:10.223 Create I/O Completion Queue (05h): Supported 00:09:10.223 Identify (06h): Supported 00:09:10.223 Abort (08h): Supported 00:09:10.223 Set Features (09h): Supported 00:09:10.223 Get Features (0Ah): Supported 00:09:10.223 Asynchronous Event Request (0Ch): Supported 00:09:10.223 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:10.223 Directive Send (19h): Supported 00:09:10.223 Directive Receive (1Ah): Supported 00:09:10.223 Virtualization Management (1Ch): Supported 00:09:10.223 Doorbell Buffer Config (7Ch): Supported 00:09:10.223 Format NVM (80h): Supported LBA-Change 00:09:10.223 I/O Commands 00:09:10.223 ------------ 00:09:10.223 Flush (00h): Supported LBA-Change 00:09:10.223 Write (01h): Supported LBA-Change 00:09:10.223 Read (02h): Supported 00:09:10.223 Compare (05h): Supported 00:09:10.223 Write Zeroes (08h): Supported LBA-Change 00:09:10.223 Dataset Management 
(09h): Supported LBA-Change 00:09:10.223 Unknown (0Ch): Supported 00:09:10.223 Unknown (12h): Supported 00:09:10.223 Copy (19h): Supported LBA-Change 00:09:10.223 Unknown (1Dh): Supported LBA-Change 00:09:10.223 00:09:10.223 Error Log 00:09:10.223 ========= 00:09:10.223 00:09:10.223 Arbitration 00:09:10.223 =========== 00:09:10.223 Arbitration Burst: no limit 00:09:10.223 00:09:10.223 Power Management 00:09:10.223 ================ 00:09:10.223 Number of Power States: 1 00:09:10.223 Current Power State: Power State #0 00:09:10.223 Power State #0: 00:09:10.224 Max Power: 25.00 W 00:09:10.224 Non-Operational State: Operational 00:09:10.224 Entry Latency: 16 microseconds 00:09:10.224 Exit Latency: 4 microseconds 00:09:10.224 Relative Read Throughput: 0 00:09:10.224 Relative Read Latency: 0 00:09:10.224 Relative Write Throughput: 0 00:09:10.224 Relative Write Latency: 0 00:09:10.224 Idle Power: Not Reported 00:09:10.224 Active Power: Not Reported 00:09:10.224 Non-Operational Permissive Mode: Not Supported 00:09:10.224 00:09:10.224 Health Information 00:09:10.224 ================== 00:09:10.224 Critical Warnings: 00:09:10.224 Available Spare Space: OK 00:09:10.224 [2024-10-28 17:59:26.522434] nvme_ctrlr.c:3642:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64207 terminated unexpected 00:09:10.224 Temperature: OK 00:09:10.224 Device Reliability: OK 00:09:10.224 Read Only: No 00:09:10.224 Volatile Memory Backup: OK 00:09:10.224 Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.224 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:10.224 Available Spare: 0% 00:09:10.224 Available Spare Threshold: 0% 00:09:10.224 Life Percentage Used: 0% 00:09:10.224 Data Units Read: 950 00:09:10.224 Data Units Written: 816 00:09:10.224 Host Read Commands: 48635 00:09:10.224 Host Write Commands: 47427 00:09:10.224 Controller Busy Time: 0 minutes 00:09:10.224 Power Cycles: 0 00:09:10.224 Power On Hours: 0 hours 00:09:10.224 Unsafe Shutdowns: 0 00:09:10.224 Unrecoverable Media Errors: 0 00:09:10.224 Lifetime Error Log Entries: 0 00:09:10.224 Warning Temperature Time: 0 minutes 00:09:10.224 Critical Temperature Time: 0 minutes 00:09:10.224 00:09:10.224 Number of Queues 00:09:10.224 ================ 00:09:10.224 Number of I/O Submission Queues: 64 00:09:10.224 Number of I/O Completion Queues: 64 00:09:10.224 00:09:10.224 ZNS Specific Controller Data 00:09:10.224 ============================ 00:09:10.224 Zone Append Size Limit: 0 00:09:10.224 00:09:10.224 00:09:10.224 Active Namespaces 00:09:10.224 ================= 00:09:10.224 Namespace ID:1 00:09:10.224 Error Recovery Timeout: Unlimited 00:09:10.224 Command Set Identifier: NVM (00h) 00:09:10.224 Deallocate: Supported 00:09:10.224 Deallocated/Unwritten Error: Supported 00:09:10.224 Deallocated Read Value: All 0x00 00:09:10.224 Deallocate in Write Zeroes: Not Supported 00:09:10.224 Deallocated Guard Field: 0xFFFF 00:09:10.224 Flush: Supported 00:09:10.224 Reservation: Not Supported 00:09:10.224 Namespace Sharing Capabilities: Private 00:09:10.224 Size (in LBAs): 1310720 (5GiB) 00:09:10.224 Capacity (in LBAs): 1310720 (5GiB) 00:09:10.224 Utilization (in LBAs): 1310720 (5GiB) 00:09:10.224 Thin Provisioning: Not Supported 00:09:10.224 Per-NS Atomic Units: No 00:09:10.224 Maximum Single Source Range Length: 128 00:09:10.224 Maximum Copy Length: 128 00:09:10.224 Maximum Source Range Count: 128 00:09:10.224 NGUID/EUI64 Never Reused: No 00:09:10.224 Namespace Write Protected: No 00:09:10.224 Number of LBA Formats: 8 00:09:10.224 Current LBA Format:
LBA Format #04 00:09:10.224 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:10.224 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:10.224 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:10.224 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:10.224 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:10.224 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:10.224 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:10.224 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:10.224 00:09:10.224 NVM Specific Namespace Data 00:09:10.224 =========================== 00:09:10.224 Logical Block Storage Tag Mask: 0 00:09:10.224 Protection Information Capabilities: 00:09:10.224 16b Guard Protection Information Storage Tag Support: No 00:09:10.224 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:10.224 Storage Tag Check Read Support: No 00:09:10.224 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.224 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.224 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.224 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.224 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.224 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.224 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.224 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.224 ===================================================== 00:09:10.224 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:10.224 ===================================================== 00:09:10.224 Controller Capabilities/Features 00:09:10.224 ================================ 00:09:10.224 Vendor ID: 1b36 00:09:10.224 Subsystem Vendor ID: 1af4 00:09:10.224 Serial Number: 12343 00:09:10.224 Model Number: QEMU NVMe Ctrl 00:09:10.224 Firmware Version: 8.0.0 00:09:10.224 Recommended Arb Burst: 6 00:09:10.224 IEEE OUI Identifier: 00 54 52 00:09:10.224 Multi-path I/O 00:09:10.224 May have multiple subsystem ports: No 00:09:10.224 May have multiple controllers: Yes 00:09:10.224 Associated with SR-IOV VF: No 00:09:10.224 Max Data Transfer Size: 524288 00:09:10.224 Max Number of Namespaces: 256 00:09:10.224 Max Number of I/O Queues: 64 00:09:10.224 NVMe Specification Version (VS): 1.4 00:09:10.224 NVMe Specification Version (Identify): 1.4 00:09:10.224 Maximum Queue Entries: 2048 00:09:10.224 Contiguous Queues Required: Yes 00:09:10.224 Arbitration Mechanisms Supported 00:09:10.224 Weighted Round Robin: Not Supported 00:09:10.224 Vendor Specific: Not Supported 00:09:10.224 Reset Timeout: 7500 ms 00:09:10.224 Doorbell Stride: 4 bytes 00:09:10.224 NVM Subsystem Reset: Not Supported 00:09:10.224 Command Sets Supported 00:09:10.224 NVM Command Set: Supported 00:09:10.224 Boot Partition: Not Supported 00:09:10.224 Memory Page Size Minimum: 4096 bytes 00:09:10.224 Memory Page Size Maximum: 65536 bytes 00:09:10.224 Persistent Memory Region: Not Supported 00:09:10.224 Optional Asynchronous Events Supported 00:09:10.224 Namespace Attribute Notices: Supported 00:09:10.224 Firmware Activation Notices: Not Supported 00:09:10.224 ANA Change Notices: Not Supported 00:09:10.224 PLE Aggregate Log 
Change Notices: Not Supported 00:09:10.224 LBA Status Info Alert Notices: Not Supported 00:09:10.224 EGE Aggregate Log Change Notices: Not Supported 00:09:10.224 Normal NVM Subsystem Shutdown event: Not Supported 00:09:10.224 Zone Descriptor Change Notices: Not Supported 00:09:10.224 Discovery Log Change Notices: Not Supported 00:09:10.224 Controller Attributes 00:09:10.224 128-bit Host Identifier: Not Supported 00:09:10.224 Non-Operational Permissive Mode: Not Supported 00:09:10.224 NVM Sets: Not Supported 00:09:10.224 Read Recovery Levels: Not Supported 00:09:10.224 Endurance Groups: Supported 00:09:10.224 Predictable Latency Mode: Not Supported 00:09:10.224 Traffic Based Keep ALive: Not Supported 00:09:10.224 Namespace Granularity: Not Supported 00:09:10.224 SQ Associations: Not Supported 00:09:10.224 UUID List: Not Supported 00:09:10.224 Multi-Domain Subsystem: Not Supported 00:09:10.224 Fixed Capacity Management: Not Supported 00:09:10.224 Variable Capacity Management: Not Supported 00:09:10.224 Delete Endurance Group: Not Supported 00:09:10.224 Delete NVM Set: Not Supported 00:09:10.224 Extended LBA Formats Supported: Supported 00:09:10.224 Flexible Data Placement Supported: Supported 00:09:10.224 00:09:10.224 Controller Memory Buffer Support 00:09:10.224 ================================ 00:09:10.224 Supported: No 00:09:10.224 00:09:10.224 Persistent Memory Region Support 00:09:10.224 ================================ 00:09:10.224 Supported: No 00:09:10.224 00:09:10.224 Admin Command Set Attributes 00:09:10.224 ============================ 00:09:10.224 Security Send/Receive: Not Supported 00:09:10.224 Format NVM: Supported 00:09:10.224 Firmware Activate/Download: Not Supported 00:09:10.224 Namespace Management: Supported 00:09:10.224 Device Self-Test: Not Supported 00:09:10.224 Directives: Supported 00:09:10.224 NVMe-MI: Not Supported 00:09:10.224 Virtualization Management: Not Supported 00:09:10.224 Doorbell Buffer Config: Supported 00:09:10.224 Get LBA Status Capability: Not Supported 00:09:10.224 Command & Feature Lockdown Capability: Not Supported 00:09:10.224 Abort Command Limit: 4 00:09:10.224 Async Event Request Limit: 4 00:09:10.224 Number of Firmware Slots: N/A 00:09:10.224 Firmware Slot 1 Read-Only: N/A 00:09:10.224 Firmware Activation Without Reset: N/A 00:09:10.224 Multiple Update Detection Support: N/A 00:09:10.224 Firmware Update Granularity: No Information Provided 00:09:10.224 Per-Namespace SMART Log: Yes 00:09:10.224 Asymmetric Namespace Access Log Page: Not Supported 00:09:10.224 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:10.224 Command Effects Log Page: Supported 00:09:10.224 Get Log Page Extended Data: Supported 00:09:10.224 Telemetry Log Pages: Not Supported 00:09:10.224 Persistent Event Log Pages: Not Supported 00:09:10.224 Supported Log Pages Log Page: May Support 00:09:10.224 Commands Supported & Effects Log Page: Not Supported 00:09:10.224 Feature Identifiers & Effects Log Page:May Support 00:09:10.224 NVMe-MI Commands & Effects Log Page: May Support 00:09:10.224 Data Area 4 for Telemetry Log: Not Supported 00:09:10.224 Error Log Page Entries Supported: 1 00:09:10.225 Keep Alive: Not Supported 00:09:10.225 00:09:10.225 NVM Command Set Attributes 00:09:10.225 ========================== 00:09:10.225 Submission Queue Entry Size 00:09:10.225 Max: 64 00:09:10.225 Min: 64 00:09:10.225 Completion Queue Entry Size 00:09:10.225 Max: 16 00:09:10.225 Min: 16 00:09:10.225 Number of Namespaces: 256 00:09:10.225 Compare Command: Supported 00:09:10.225 Write 
Uncorrectable Command: Not Supported 00:09:10.225 Dataset Management Command: Supported 00:09:10.225 Write Zeroes Command: Supported 00:09:10.225 Set Features Save Field: Supported 00:09:10.225 Reservations: Not Supported 00:09:10.225 Timestamp: Supported 00:09:10.225 Copy: Supported 00:09:10.225 Volatile Write Cache: Present 00:09:10.225 Atomic Write Unit (Normal): 1 00:09:10.225 Atomic Write Unit (PFail): 1 00:09:10.225 Atomic Compare & Write Unit: 1 00:09:10.225 Fused Compare & Write: Not Supported 00:09:10.225 Scatter-Gather List 00:09:10.225 SGL Command Set: Supported 00:09:10.225 SGL Keyed: Not Supported 00:09:10.225 SGL Bit Bucket Descriptor: Not Supported 00:09:10.225 SGL Metadata Pointer: Not Supported 00:09:10.225 Oversized SGL: Not Supported 00:09:10.225 SGL Metadata Address: Not Supported 00:09:10.225 SGL Offset: Not Supported 00:09:10.225 Transport SGL Data Block: Not Supported 00:09:10.225 Replay Protected Memory Block: Not Supported 00:09:10.225 00:09:10.225 Firmware Slot Information 00:09:10.225 ========================= 00:09:10.225 Active slot: 1 00:09:10.225 Slot 1 Firmware Revision: 1.0 00:09:10.225 00:09:10.225 00:09:10.225 Commands Supported and Effects 00:09:10.225 ============================== 00:09:10.225 Admin Commands 00:09:10.225 -------------- 00:09:10.225 Delete I/O Submission Queue (00h): Supported 00:09:10.225 Create I/O Submission Queue (01h): Supported 00:09:10.225 Get Log Page (02h): Supported 00:09:10.225 Delete I/O Completion Queue (04h): Supported 00:09:10.225 Create I/O Completion Queue (05h): Supported 00:09:10.225 Identify (06h): Supported 00:09:10.225 Abort (08h): Supported 00:09:10.225 Set Features (09h): Supported 00:09:10.225 Get Features (0Ah): Supported 00:09:10.225 Asynchronous Event Request (0Ch): Supported 00:09:10.225 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:10.225 Directive Send (19h): Supported 00:09:10.225 Directive Receive (1Ah): Supported 00:09:10.225 Virtualization Management (1Ch): Supported 00:09:10.225 Doorbell Buffer Config (7Ch): Supported 00:09:10.225 Format NVM (80h): Supported LBA-Change 00:09:10.225 I/O Commands 00:09:10.225 ------------ 00:09:10.225 Flush (00h): Supported LBA-Change 00:09:10.225 Write (01h): Supported LBA-Change 00:09:10.225 Read (02h): Supported 00:09:10.225 Compare (05h): Supported 00:09:10.225 Write Zeroes (08h): Supported LBA-Change 00:09:10.225 Dataset Management (09h): Supported LBA-Change 00:09:10.225 Unknown (0Ch): Supported 00:09:10.225 Unknown (12h): Supported 00:09:10.225 Copy (19h): Supported LBA-Change 00:09:10.225 Unknown (1Dh): Supported LBA-Change 00:09:10.225 00:09:10.225 Error Log 00:09:10.225 ========= 00:09:10.225 00:09:10.225 Arbitration 00:09:10.225 =========== 00:09:10.225 Arbitration Burst: no limit 00:09:10.225 00:09:10.225 Power Management 00:09:10.225 ================ 00:09:10.225 Number of Power States: 1 00:09:10.225 Current Power State: Power State #0 00:09:10.225 Power State #0: 00:09:10.225 Max Power: 25.00 W 00:09:10.225 Non-Operational State: Operational 00:09:10.225 Entry Latency: 16 microseconds 00:09:10.225 Exit Latency: 4 microseconds 00:09:10.225 Relative Read Throughput: 0 00:09:10.225 Relative Read Latency: 0 00:09:10.225 Relative Write Throughput: 0 00:09:10.225 Relative Write Latency: 0 00:09:10.225 Idle Power: Not Reported 00:09:10.225 Active Power: Not Reported 00:09:10.225 Non-Operational Permissive Mode: Not Supported 00:09:10.225 00:09:10.225 Health Information 00:09:10.225 ================== 00:09:10.225 Critical Warnings: 00:09:10.225 
Available Spare Space: OK 00:09:10.225 Temperature: OK 00:09:10.225 Device Reliability: OK 00:09:10.225 Read Only: No 00:09:10.225 Volatile Memory Backup: OK 00:09:10.225 Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.225 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:10.225 Available Spare: 0% 00:09:10.225 Available Spare Threshold: 0% 00:09:10.225 Life Percentage Used: 0% 00:09:10.225 Data Units Read: 790 00:09:10.225 Data Units Written: 719 00:09:10.225 Host Read Commands: 34626 00:09:10.225 Host Write Commands: 34049 00:09:10.225 Controller Busy Time: 0 minutes 00:09:10.225 Power Cycles: 0 00:09:10.225 Power On Hours: 0 hours 00:09:10.225 Unsafe Shutdowns: 0 00:09:10.225 Unrecoverable Media Errors: 0 00:09:10.225 Lifetime Error Log Entries: 0 00:09:10.225 Warning Temperature Time: 0 minutes 00:09:10.225 Critical Temperature Time: 0 minutes 00:09:10.225 00:09:10.225 Number of Queues 00:09:10.225 ================ 00:09:10.225 Number of I/O Submission Queues: 64 00:09:10.225 Number of I/O Completion Queues: 64 00:09:10.225 00:09:10.225 ZNS Specific Controller Data 00:09:10.225 ============================ 00:09:10.225 Zone Append Size Limit: 0 00:09:10.225 00:09:10.225 00:09:10.225 Active Namespaces 00:09:10.225 ================= 00:09:10.225 Namespace ID:1 00:09:10.225 Error Recovery Timeout: Unlimited 00:09:10.225 Command Set Identifier: NVM (00h) 00:09:10.225 Deallocate: Supported 00:09:10.225 Deallocated/Unwritten Error: Supported 00:09:10.225 Deallocated Read Value: All 0x00 00:09:10.225 Deallocate in Write Zeroes: Not Supported 00:09:10.225 Deallocated Guard Field: 0xFFFF 00:09:10.225 Flush: Supported 00:09:10.225 Reservation: Not Supported 00:09:10.225 Namespace Sharing Capabilities: Multiple Controllers 00:09:10.225 Size (in LBAs): 262144 (1GiB) 00:09:10.225 Capacity (in LBAs): 262144 (1GiB) 00:09:10.225 Utilization (in LBAs): 262144 (1GiB) 00:09:10.225 Thin Provisioning: Not Supported 00:09:10.225 Per-NS Atomic Units: No 00:09:10.225 Maximum Single Source Range Length: 128 00:09:10.225 Maximum Copy Length: 128 00:09:10.225 Maximum Source Range Count: 128 00:09:10.225 NGUID/EUI64 Never Reused: No 00:09:10.225 Namespace Write Protected: No 00:09:10.225 Endurance group ID: 1 00:09:10.225 Number of LBA Formats: 8 00:09:10.250 Current LBA Format: LBA Format #04 00:09:10.250 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:10.250 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:10.250 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:10.250 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:10.250 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:10.250 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:10.250 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:10.250 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:10.250 00:09:10.250 Get Feature FDP: 00:09:10.250 ================ 00:09:10.250 Enabled: Yes 00:09:10.251 FDP configuration index: 0 00:09:10.251 00:09:10.251 FDP configurations log page 00:09:10.251 =========================== 00:09:10.251 Number of FDP configurations: 1 00:09:10.251 Version: 0 00:09:10.251 Size: 112 00:09:10.251 FDP Configuration Descriptor: 0 00:09:10.251 Descriptor Size: 96 00:09:10.251 Reclaim Group Identifier format: 2 00:09:10.251 FDP Volatile Write Cache: Not Present 00:09:10.251 FDP Configuration: Valid 00:09:10.251 Vendor Specific Size: 0 00:09:10.251 Number of Reclaim Groups: 2 00:09:10.251 Number of Reclaim Unit Handles: 8 00:09:10.251 Max Placement Identifiers: 128 00:09:10.251 Number of
Namespaces Supported: 256 00:09:10.251 Reclaim unit Nominal Size: 6000000 bytes 00:09:10.251 Estimated Reclaim Unit Time Limit: Not Reported 00:09:10.251 RUH Desc #000: RUH Type: Initially Isolated 00:09:10.251 RUH Desc #001: RUH Type: Initially Isolated 00:09:10.251 RUH Desc #002: RUH Type: Initially Isolated 00:09:10.251 RUH Desc #003: RUH Type: Initially Isolated 00:09:10.251 RUH Desc #004: RUH Type: Initially Isolated 00:09:10.251 RUH Desc #005: RUH Type: Initially Isolated 00:09:10.251 RUH Desc #006: RUH Type: Initially Isolated 00:09:10.251 RUH Desc #007: RUH Type: Initially Isolated 00:09:10.251 00:09:10.251 FDP reclaim unit handle usage log page 00:09:10.251 ====================================== 00:09:10.251 Number of Reclaim Unit Handles: 8 00:09:10.251 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:10.251 RUH Usage Desc #001: RUH Attributes: Unused 00:09:10.251 RUH Usage Desc #002: RUH Attributes: Unused 00:09:10.251 RUH Usage Desc #003: RUH Attributes: Unused 00:09:10.251 RUH Usage Desc #004: RUH Attributes: Unused 00:09:10.251 RUH Usage Desc #005: RUH Attributes: Unused 00:09:10.251 RUH Usage Desc #006: RUH Attributes: Unused 00:09:10.251 RUH Usage Desc #007: RUH Attributes: Unused 00:09:10.251 00:09:10.251 FDP statistics log page 00:09:10.251 ======================= 00:09:10.251 Host bytes with metadata written: 445030400 00:09:10.251 [2024-10-28 17:59:26.524114] nvme_ctrlr.c:3642:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64207 terminated unexpected 00:09:10.251 Media bytes with metadata written: 445095936 00:09:10.251 Media bytes erased: 0 00:09:10.251 00:09:10.251 FDP events log page 00:09:10.251 =================== 00:09:10.251 Number of FDP events: 0 00:09:10.251 00:09:10.251 NVM Specific Namespace Data 00:09:10.251 =========================== 00:09:10.251 Logical Block Storage Tag Mask: 0 00:09:10.251 Protection Information Capabilities: 00:09:10.251 16b Guard Protection Information Storage Tag Support: No 00:09:10.251 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:10.251 Storage Tag Check Read Support: No 00:09:10.251 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.251 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.251 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.251 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.251 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.251 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.251 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.251 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.251 ===================================================== 00:09:10.251 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:10.251 ===================================================== 00:09:10.251 Controller Capabilities/Features 00:09:10.251 ================================ 00:09:10.251 Vendor ID: 1b36 00:09:10.251 Subsystem Vendor ID: 1af4 00:09:10.251 Serial Number: 12342 00:09:10.251 Model Number: QEMU NVMe Ctrl 00:09:10.251 Firmware Version: 8.0.0 00:09:10.251 Recommended Arb Burst: 6 00:09:10.251 IEEE OUI Identifier: 00 54 52 00:09:10.251 Multi-path I/O
00:09:10.251 May have multiple subsystem ports: No 00:09:10.251 May have multiple controllers: No 00:09:10.251 Associated with SR-IOV VF: No 00:09:10.251 Max Data Transfer Size: 524288 00:09:10.251 Max Number of Namespaces: 256 00:09:10.251 Max Number of I/O Queues: 64 00:09:10.251 NVMe Specification Version (VS): 1.4 00:09:10.251 NVMe Specification Version (Identify): 1.4 00:09:10.251 Maximum Queue Entries: 2048 00:09:10.251 Contiguous Queues Required: Yes 00:09:10.251 Arbitration Mechanisms Supported 00:09:10.251 Weighted Round Robin: Not Supported 00:09:10.251 Vendor Specific: Not Supported 00:09:10.251 Reset Timeout: 7500 ms 00:09:10.251 Doorbell Stride: 4 bytes 00:09:10.251 NVM Subsystem Reset: Not Supported 00:09:10.251 Command Sets Supported 00:09:10.251 NVM Command Set: Supported 00:09:10.251 Boot Partition: Not Supported 00:09:10.251 Memory Page Size Minimum: 4096 bytes 00:09:10.251 Memory Page Size Maximum: 65536 bytes 00:09:10.251 Persistent Memory Region: Not Supported 00:09:10.251 Optional Asynchronous Events Supported 00:09:10.251 Namespace Attribute Notices: Supported 00:09:10.251 Firmware Activation Notices: Not Supported 00:09:10.251 ANA Change Notices: Not Supported 00:09:10.251 PLE Aggregate Log Change Notices: Not Supported 00:09:10.251 LBA Status Info Alert Notices: Not Supported 00:09:10.251 EGE Aggregate Log Change Notices: Not Supported 00:09:10.251 Normal NVM Subsystem Shutdown event: Not Supported 00:09:10.251 Zone Descriptor Change Notices: Not Supported 00:09:10.251 Discovery Log Change Notices: Not Supported 00:09:10.251 Controller Attributes 00:09:10.251 128-bit Host Identifier: Not Supported 00:09:10.251 Non-Operational Permissive Mode: Not Supported 00:09:10.251 NVM Sets: Not Supported 00:09:10.251 Read Recovery Levels: Not Supported 00:09:10.251 Endurance Groups: Not Supported 00:09:10.251 Predictable Latency Mode: Not Supported 00:09:10.251 Traffic Based Keep ALive: Not Supported 00:09:10.251 Namespace Granularity: Not Supported 00:09:10.251 SQ Associations: Not Supported 00:09:10.251 UUID List: Not Supported 00:09:10.251 Multi-Domain Subsystem: Not Supported 00:09:10.251 Fixed Capacity Management: Not Supported 00:09:10.251 Variable Capacity Management: Not Supported 00:09:10.251 Delete Endurance Group: Not Supported 00:09:10.251 Delete NVM Set: Not Supported 00:09:10.251 Extended LBA Formats Supported: Supported 00:09:10.251 Flexible Data Placement Supported: Not Supported 00:09:10.251 00:09:10.251 Controller Memory Buffer Support 00:09:10.251 ================================ 00:09:10.251 Supported: No 00:09:10.251 00:09:10.251 Persistent Memory Region Support 00:09:10.251 ================================ 00:09:10.251 Supported: No 00:09:10.251 00:09:10.251 Admin Command Set Attributes 00:09:10.251 ============================ 00:09:10.251 Security Send/Receive: Not Supported 00:09:10.251 Format NVM: Supported 00:09:10.251 Firmware Activate/Download: Not Supported 00:09:10.251 Namespace Management: Supported 00:09:10.251 Device Self-Test: Not Supported 00:09:10.251 Directives: Supported 00:09:10.251 NVMe-MI: Not Supported 00:09:10.251 Virtualization Management: Not Supported 00:09:10.251 Doorbell Buffer Config: Supported 00:09:10.251 Get LBA Status Capability: Not Supported 00:09:10.251 Command & Feature Lockdown Capability: Not Supported 00:09:10.251 Abort Command Limit: 4 00:09:10.251 Async Event Request Limit: 4 00:09:10.251 Number of Firmware Slots: N/A 00:09:10.251 Firmware Slot 1 Read-Only: N/A 00:09:10.251 Firmware Activation Without Reset: N/A 
00:09:10.251 Multiple Update Detection Support: N/A 00:09:10.251 Firmware Update Granularity: No Information Provided 00:09:10.251 Per-Namespace SMART Log: Yes 00:09:10.251 Asymmetric Namespace Access Log Page: Not Supported 00:09:10.251 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:10.251 Command Effects Log Page: Supported 00:09:10.251 Get Log Page Extended Data: Supported 00:09:10.251 Telemetry Log Pages: Not Supported 00:09:10.251 Persistent Event Log Pages: Not Supported 00:09:10.251 Supported Log Pages Log Page: May Support 00:09:10.251 Commands Supported & Effects Log Page: Not Supported 00:09:10.252 Feature Identifiers & Effects Log Page:May Support 00:09:10.252 NVMe-MI Commands & Effects Log Page: May Support 00:09:10.252 Data Area 4 for Telemetry Log: Not Supported 00:09:10.252 Error Log Page Entries Supported: 1 00:09:10.252 Keep Alive: Not Supported 00:09:10.252 00:09:10.252 NVM Command Set Attributes 00:09:10.252 ========================== 00:09:10.252 Submission Queue Entry Size 00:09:10.252 Max: 64 00:09:10.252 Min: 64 00:09:10.252 Completion Queue Entry Size 00:09:10.252 Max: 16 00:09:10.252 Min: 16 00:09:10.252 Number of Namespaces: 256 00:09:10.252 Compare Command: Supported 00:09:10.252 Write Uncorrectable Command: Not Supported 00:09:10.252 Dataset Management Command: Supported 00:09:10.252 Write Zeroes Command: Supported 00:09:10.252 Set Features Save Field: Supported 00:09:10.252 Reservations: Not Supported 00:09:10.252 Timestamp: Supported 00:09:10.252 Copy: Supported 00:09:10.252 Volatile Write Cache: Present 00:09:10.252 Atomic Write Unit (Normal): 1 00:09:10.252 Atomic Write Unit (PFail): 1 00:09:10.252 Atomic Compare & Write Unit: 1 00:09:10.252 Fused Compare & Write: Not Supported 00:09:10.252 Scatter-Gather List 00:09:10.252 SGL Command Set: Supported 00:09:10.252 SGL Keyed: Not Supported 00:09:10.252 SGL Bit Bucket Descriptor: Not Supported 00:09:10.252 SGL Metadata Pointer: Not Supported 00:09:10.252 Oversized SGL: Not Supported 00:09:10.252 SGL Metadata Address: Not Supported 00:09:10.252 SGL Offset: Not Supported 00:09:10.252 Transport SGL Data Block: Not Supported 00:09:10.252 Replay Protected Memory Block: Not Supported 00:09:10.252 00:09:10.252 Firmware Slot Information 00:09:10.252 ========================= 00:09:10.252 Active slot: 1 00:09:10.252 Slot 1 Firmware Revision: 1.0 00:09:10.252 00:09:10.252 00:09:10.252 Commands Supported and Effects 00:09:10.252 ============================== 00:09:10.252 Admin Commands 00:09:10.252 -------------- 00:09:10.252 Delete I/O Submission Queue (00h): Supported 00:09:10.252 Create I/O Submission Queue (01h): Supported 00:09:10.252 Get Log Page (02h): Supported 00:09:10.252 Delete I/O Completion Queue (04h): Supported 00:09:10.252 Create I/O Completion Queue (05h): Supported 00:09:10.252 Identify (06h): Supported 00:09:10.252 Abort (08h): Supported 00:09:10.252 Set Features (09h): Supported 00:09:10.252 Get Features (0Ah): Supported 00:09:10.252 Asynchronous Event Request (0Ch): Supported 00:09:10.252 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:10.252 Directive Send (19h): Supported 00:09:10.252 Directive Receive (1Ah): Supported 00:09:10.252 Virtualization Management (1Ch): Supported 00:09:10.252 Doorbell Buffer Config (7Ch): Supported 00:09:10.252 Format NVM (80h): Supported LBA-Change 00:09:10.252 I/O Commands 00:09:10.252 ------------ 00:09:10.252 Flush (00h): Supported LBA-Change 00:09:10.252 Write (01h): Supported LBA-Change 00:09:10.252 Read (02h): Supported 00:09:10.252 Compare (05h): 
Supported 00:09:10.252 Write Zeroes (08h): Supported LBA-Change 00:09:10.252 Dataset Management (09h): Supported LBA-Change 00:09:10.252 Unknown (0Ch): Supported 00:09:10.252 Unknown (12h): Supported 00:09:10.252 Copy (19h): Supported LBA-Change 00:09:10.252 Unknown (1Dh): Supported LBA-Change 00:09:10.252 00:09:10.252 Error Log 00:09:10.252 ========= 00:09:10.252 00:09:10.252 Arbitration 00:09:10.252 =========== 00:09:10.252 Arbitration Burst: no limit 00:09:10.252 00:09:10.252 Power Management 00:09:10.252 ================ 00:09:10.252 Number of Power States: 1 00:09:10.252 Current Power State: Power State #0 00:09:10.252 Power State #0: 00:09:10.252 Max Power: 25.00 W 00:09:10.252 Non-Operational State: Operational 00:09:10.252 Entry Latency: 16 microseconds 00:09:10.252 Exit Latency: 4 microseconds 00:09:10.252 Relative Read Throughput: 0 00:09:10.252 Relative Read Latency: 0 00:09:10.252 Relative Write Throughput: 0 00:09:10.252 Relative Write Latency: 0 00:09:10.252 Idle Power: Not Reported 00:09:10.252 Active Power: Not Reported 00:09:10.252 Non-Operational Permissive Mode: Not Supported 00:09:10.252 00:09:10.252 Health Information 00:09:10.252 ================== 00:09:10.252 Critical Warnings: 00:09:10.252 Available Spare Space: OK 00:09:10.252 Temperature: OK 00:09:10.252 Device Reliability: OK 00:09:10.252 Read Only: No 00:09:10.252 Volatile Memory Backup: OK 00:09:10.252 Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.252 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:10.252 Available Spare: 0% 00:09:10.252 Available Spare Threshold: 0% 00:09:10.252 Life Percentage Used: 0% 00:09:10.252 Data Units Read: 2098 00:09:10.252 Data Units Written: 1886 00:09:10.252 Host Read Commands: 101354 00:09:10.252 Host Write Commands: 99623 00:09:10.252 Controller Busy Time: 0 minutes 00:09:10.252 Power Cycles: 0 00:09:10.252 Power On Hours: 0 hours 00:09:10.252 Unsafe Shutdowns: 0 00:09:10.252 Unrecoverable Media Errors: 0 00:09:10.252 Lifetime Error Log Entries: 0 00:09:10.252 Warning Temperature Time: 0 minutes 00:09:10.252 Critical Temperature Time: 0 minutes 00:09:10.252 00:09:10.252 Number of Queues 00:09:10.252 ================ 00:09:10.252 Number of I/O Submission Queues: 64 00:09:10.252 Number of I/O Completion Queues: 64 00:09:10.252 00:09:10.252 ZNS Specific Controller Data 00:09:10.252 ============================ 00:09:10.252 Zone Append Size Limit: 0 00:09:10.252 00:09:10.252 00:09:10.252 Active Namespaces 00:09:10.252 ================= 00:09:10.252 Namespace ID:1 00:09:10.252 Error Recovery Timeout: Unlimited 00:09:10.252 Command Set Identifier: NVM (00h) 00:09:10.252 Deallocate: Supported 00:09:10.252 Deallocated/Unwritten Error: Supported 00:09:10.252 Deallocated Read Value: All 0x00 00:09:10.252 Deallocate in Write Zeroes: Not Supported 00:09:10.252 Deallocated Guard Field: 0xFFFF 00:09:10.252 Flush: Supported 00:09:10.252 Reservation: Not Supported 00:09:10.252 Namespace Sharing Capabilities: Private 00:09:10.252 Size (in LBAs): 1048576 (4GiB) 00:09:10.252 Capacity (in LBAs): 1048576 (4GiB) 00:09:10.252 Utilization (in LBAs): 1048576 (4GiB) 00:09:10.252 Thin Provisioning: Not Supported 00:09:10.252 Per-NS Atomic Units: No 00:09:10.252 Maximum Single Source Range Length: 128 00:09:10.252 Maximum Copy Length: 128 00:09:10.252 Maximum Source Range Count: 128 00:09:10.252 NGUID/EUI64 Never Reused: No 00:09:10.252 Namespace Write Protected: No 00:09:10.252 Number of LBA Formats: 8 00:09:10.252 Current LBA Format: LBA Format #04 00:09:10.252 LBA Format #00: Data Size: 
512 Metadata Size: 0 00:09:10.252 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:10.252 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:10.252 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:10.252 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:10.252 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:10.252 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:10.252 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:10.252 00:09:10.252 NVM Specific Namespace Data 00:09:10.252 =========================== 00:09:10.252 Logical Block Storage Tag Mask: 0 00:09:10.252 Protection Information Capabilities: 00:09:10.252 16b Guard Protection Information Storage Tag Support: No 00:09:10.252 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:10.252 Storage Tag Check Read Support: No 00:09:10.252 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.252 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.252 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.252 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.252 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.252 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.252 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.252 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.252 Namespace ID:2 00:09:10.252 Error Recovery Timeout: Unlimited 00:09:10.252 Command Set Identifier: NVM (00h) 00:09:10.252 Deallocate: Supported 00:09:10.252 Deallocated/Unwritten Error: Supported 00:09:10.252 Deallocated Read Value: All 0x00 00:09:10.252 Deallocate in Write Zeroes: Not Supported 00:09:10.252 Deallocated Guard Field: 0xFFFF 00:09:10.252 Flush: Supported 00:09:10.252 Reservation: Not Supported 00:09:10.252 Namespace Sharing Capabilities: Private 00:09:10.252 Size (in LBAs): 1048576 (4GiB) 00:09:10.252 Capacity (in LBAs): 1048576 (4GiB) 00:09:10.252 Utilization (in LBAs): 1048576 (4GiB) 00:09:10.252 Thin Provisioning: Not Supported 00:09:10.252 Per-NS Atomic Units: No 00:09:10.252 Maximum Single Source Range Length: 128 00:09:10.252 Maximum Copy Length: 128 00:09:10.252 Maximum Source Range Count: 128 00:09:10.252 NGUID/EUI64 Never Reused: No 00:09:10.252 Namespace Write Protected: No 00:09:10.252 Number of LBA Formats: 8 00:09:10.252 Current LBA Format: LBA Format #04 00:09:10.252 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:10.252 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:10.253 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:10.253 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:10.253 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:10.253 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:10.253 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:10.253 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:10.253 00:09:10.253 NVM Specific Namespace Data 00:09:10.253 =========================== 00:09:10.253 Logical Block Storage Tag Mask: 0 00:09:10.253 Protection Information Capabilities: 00:09:10.253 16b Guard Protection Information Storage Tag Support: No 00:09:10.253 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 
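
Note: the namespace sizes above are reported in LBAs, and the data size of the current LBA format converts them to the byte capacity shown in parentheses. For the namespaces on 0000:00:12.0, which sit on LBA Format #04 (4096-byte data, no metadata): 1048576 LBAs x 4096 bytes/LBA = 4294967296 bytes = 4 GiB, matching the "(4GiB)" annotation printed next to Size, Capacity and Utilization.
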
00:09:10.253 Storage Tag Check Read Support: No 00:09:10.253 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Namespace ID:3 00:09:10.253 Error Recovery Timeout: Unlimited 00:09:10.253 Command Set Identifier: NVM (00h) 00:09:10.253 Deallocate: Supported 00:09:10.253 Deallocated/Unwritten Error: Supported 00:09:10.253 Deallocated Read Value: All 0x00 00:09:10.253 Deallocate in Write Zeroes: Not Supported 00:09:10.253 Deallocated Guard Field: 0xFFFF 00:09:10.253 Flush: Supported 00:09:10.253 Reservation: Not Supported 00:09:10.253 Namespace Sharing Capabilities: Private 00:09:10.253 Size (in LBAs): 1048576 (4GiB) 00:09:10.253 Capacity (in LBAs): 1048576 (4GiB) 00:09:10.253 Utilization (in LBAs): 1048576 (4GiB) 00:09:10.253 Thin Provisioning: Not Supported 00:09:10.253 Per-NS Atomic Units: No 00:09:10.253 Maximum Single Source Range Length: 128 00:09:10.253 Maximum Copy Length: 128 00:09:10.253 Maximum Source Range Count: 128 00:09:10.253 NGUID/EUI64 Never Reused: No 00:09:10.253 Namespace Write Protected: No 00:09:10.253 Number of LBA Formats: 8 00:09:10.253 Current LBA Format: LBA Format #04 00:09:10.253 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:10.253 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:10.253 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:10.253 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:10.253 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:10.253 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:10.253 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:10.253 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:10.253 00:09:10.253 NVM Specific Namespace Data 00:09:10.253 =========================== 00:09:10.253 Logical Block Storage Tag Mask: 0 00:09:10.253 Protection Information Capabilities: 00:09:10.253 16b Guard Protection Information Storage Tag Support: No 00:09:10.253 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:10.253 Storage Tag Check Read Support: No 00:09:10.253 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 Extended LBA Format #07: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.253 17:59:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:10.253 17:59:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:09:10.511 ===================================================== 00:09:10.511 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:10.511 ===================================================== 00:09:10.511 Controller Capabilities/Features 00:09:10.511 ================================ 00:09:10.511 Vendor ID: 1b36 00:09:10.511 Subsystem Vendor ID: 1af4 00:09:10.511 Serial Number: 12340 00:09:10.511 Model Number: QEMU NVMe Ctrl 00:09:10.511 Firmware Version: 8.0.0 00:09:10.511 Recommended Arb Burst: 6 00:09:10.511 IEEE OUI Identifier: 00 54 52 00:09:10.511 Multi-path I/O 00:09:10.511 May have multiple subsystem ports: No 00:09:10.511 May have multiple controllers: No 00:09:10.511 Associated with SR-IOV VF: No 00:09:10.511 Max Data Transfer Size: 524288 00:09:10.511 Max Number of Namespaces: 256 00:09:10.511 Max Number of I/O Queues: 64 00:09:10.511 NVMe Specification Version (VS): 1.4 00:09:10.511 NVMe Specification Version (Identify): 1.4 00:09:10.511 Maximum Queue Entries: 2048 00:09:10.511 Contiguous Queues Required: Yes 00:09:10.511 Arbitration Mechanisms Supported 00:09:10.511 Weighted Round Robin: Not Supported 00:09:10.511 Vendor Specific: Not Supported 00:09:10.511 Reset Timeout: 7500 ms 00:09:10.511 Doorbell Stride: 4 bytes 00:09:10.511 NVM Subsystem Reset: Not Supported 00:09:10.511 Command Sets Supported 00:09:10.511 NVM Command Set: Supported 00:09:10.511 Boot Partition: Not Supported 00:09:10.511 Memory Page Size Minimum: 4096 bytes 00:09:10.511 Memory Page Size Maximum: 65536 bytes 00:09:10.511 Persistent Memory Region: Not Supported 00:09:10.511 Optional Asynchronous Events Supported 00:09:10.511 Namespace Attribute Notices: Supported 00:09:10.511 Firmware Activation Notices: Not Supported 00:09:10.511 ANA Change Notices: Not Supported 00:09:10.511 PLE Aggregate Log Change Notices: Not Supported 00:09:10.511 LBA Status Info Alert Notices: Not Supported 00:09:10.511 EGE Aggregate Log Change Notices: Not Supported 00:09:10.511 Normal NVM Subsystem Shutdown event: Not Supported 00:09:10.511 Zone Descriptor Change Notices: Not Supported 00:09:10.511 Discovery Log Change Notices: Not Supported 00:09:10.511 Controller Attributes 00:09:10.511 128-bit Host Identifier: Not Supported 00:09:10.511 Non-Operational Permissive Mode: Not Supported 00:09:10.511 NVM Sets: Not Supported 00:09:10.511 Read Recovery Levels: Not Supported 00:09:10.511 Endurance Groups: Not Supported 00:09:10.511 Predictable Latency Mode: Not Supported 00:09:10.511 Traffic Based Keep ALive: Not Supported 00:09:10.511 Namespace Granularity: Not Supported 00:09:10.511 SQ Associations: Not Supported 00:09:10.511 UUID List: Not Supported 00:09:10.511 Multi-Domain Subsystem: Not Supported 00:09:10.511 Fixed Capacity Management: Not Supported 00:09:10.511 Variable Capacity Management: Not Supported 00:09:10.511 Delete Endurance Group: Not Supported 00:09:10.511 Delete NVM Set: Not Supported 00:09:10.511 Extended LBA Formats Supported: Supported 00:09:10.511 Flexible Data Placement Supported: Not Supported 00:09:10.511 00:09:10.511 Controller Memory Buffer Support 00:09:10.511 ================================ 00:09:10.511 Supported: No 00:09:10.511 00:09:10.511 Persistent Memory Region Support 00:09:10.511 
================================ 00:09:10.511 Supported: No 00:09:10.511 00:09:10.511 Admin Command Set Attributes 00:09:10.511 ============================ 00:09:10.511 Security Send/Receive: Not Supported 00:09:10.511 Format NVM: Supported 00:09:10.511 Firmware Activate/Download: Not Supported 00:09:10.511 Namespace Management: Supported 00:09:10.511 Device Self-Test: Not Supported 00:09:10.511 Directives: Supported 00:09:10.511 NVMe-MI: Not Supported 00:09:10.511 Virtualization Management: Not Supported 00:09:10.511 Doorbell Buffer Config: Supported 00:09:10.511 Get LBA Status Capability: Not Supported 00:09:10.511 Command & Feature Lockdown Capability: Not Supported 00:09:10.511 Abort Command Limit: 4 00:09:10.511 Async Event Request Limit: 4 00:09:10.511 Number of Firmware Slots: N/A 00:09:10.511 Firmware Slot 1 Read-Only: N/A 00:09:10.512 Firmware Activation Without Reset: N/A 00:09:10.512 Multiple Update Detection Support: N/A 00:09:10.512 Firmware Update Granularity: No Information Provided 00:09:10.512 Per-Namespace SMART Log: Yes 00:09:10.512 Asymmetric Namespace Access Log Page: Not Supported 00:09:10.512 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:09:10.512 Command Effects Log Page: Supported 00:09:10.512 Get Log Page Extended Data: Supported 00:09:10.512 Telemetry Log Pages: Not Supported 00:09:10.512 Persistent Event Log Pages: Not Supported 00:09:10.512 Supported Log Pages Log Page: May Support 00:09:10.512 Commands Supported & Effects Log Page: Not Supported 00:09:10.512 Feature Identifiers & Effects Log Page:May Support 00:09:10.512 NVMe-MI Commands & Effects Log Page: May Support 00:09:10.512 Data Area 4 for Telemetry Log: Not Supported 00:09:10.512 Error Log Page Entries Supported: 1 00:09:10.512 Keep Alive: Not Supported 00:09:10.512 00:09:10.512 NVM Command Set Attributes 00:09:10.512 ========================== 00:09:10.512 Submission Queue Entry Size 00:09:10.512 Max: 64 00:09:10.512 Min: 64 00:09:10.512 Completion Queue Entry Size 00:09:10.512 Max: 16 00:09:10.512 Min: 16 00:09:10.512 Number of Namespaces: 256 00:09:10.512 Compare Command: Supported 00:09:10.512 Write Uncorrectable Command: Not Supported 00:09:10.512 Dataset Management Command: Supported 00:09:10.512 Write Zeroes Command: Supported 00:09:10.512 Set Features Save Field: Supported 00:09:10.512 Reservations: Not Supported 00:09:10.512 Timestamp: Supported 00:09:10.512 Copy: Supported 00:09:10.512 Volatile Write Cache: Present 00:09:10.512 Atomic Write Unit (Normal): 1 00:09:10.512 Atomic Write Unit (PFail): 1 00:09:10.512 Atomic Compare & Write Unit: 1 00:09:10.512 Fused Compare & Write: Not Supported 00:09:10.512 Scatter-Gather List 00:09:10.512 SGL Command Set: Supported 00:09:10.512 SGL Keyed: Not Supported 00:09:10.512 SGL Bit Bucket Descriptor: Not Supported 00:09:10.512 SGL Metadata Pointer: Not Supported 00:09:10.512 Oversized SGL: Not Supported 00:09:10.512 SGL Metadata Address: Not Supported 00:09:10.512 SGL Offset: Not Supported 00:09:10.512 Transport SGL Data Block: Not Supported 00:09:10.512 Replay Protected Memory Block: Not Supported 00:09:10.512 00:09:10.512 Firmware Slot Information 00:09:10.512 ========================= 00:09:10.512 Active slot: 1 00:09:10.512 Slot 1 Firmware Revision: 1.0 00:09:10.512 00:09:10.512 00:09:10.512 Commands Supported and Effects 00:09:10.512 ============================== 00:09:10.512 Admin Commands 00:09:10.512 -------------- 00:09:10.512 Delete I/O Submission Queue (00h): Supported 00:09:10.512 Create I/O Submission Queue (01h): Supported 00:09:10.512 
Get Log Page (02h): Supported 00:09:10.512 Delete I/O Completion Queue (04h): Supported 00:09:10.512 Create I/O Completion Queue (05h): Supported 00:09:10.512 Identify (06h): Supported 00:09:10.512 Abort (08h): Supported 00:09:10.512 Set Features (09h): Supported 00:09:10.512 Get Features (0Ah): Supported 00:09:10.512 Asynchronous Event Request (0Ch): Supported 00:09:10.512 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:10.512 Directive Send (19h): Supported 00:09:10.512 Directive Receive (1Ah): Supported 00:09:10.512 Virtualization Management (1Ch): Supported 00:09:10.512 Doorbell Buffer Config (7Ch): Supported 00:09:10.512 Format NVM (80h): Supported LBA-Change 00:09:10.512 I/O Commands 00:09:10.512 ------------ 00:09:10.512 Flush (00h): Supported LBA-Change 00:09:10.512 Write (01h): Supported LBA-Change 00:09:10.512 Read (02h): Supported 00:09:10.512 Compare (05h): Supported 00:09:10.512 Write Zeroes (08h): Supported LBA-Change 00:09:10.512 Dataset Management (09h): Supported LBA-Change 00:09:10.512 Unknown (0Ch): Supported 00:09:10.512 Unknown (12h): Supported 00:09:10.512 Copy (19h): Supported LBA-Change 00:09:10.512 Unknown (1Dh): Supported LBA-Change 00:09:10.512 00:09:10.512 Error Log 00:09:10.512 ========= 00:09:10.512 00:09:10.512 Arbitration 00:09:10.512 =========== 00:09:10.512 Arbitration Burst: no limit 00:09:10.512 00:09:10.512 Power Management 00:09:10.512 ================ 00:09:10.512 Number of Power States: 1 00:09:10.512 Current Power State: Power State #0 00:09:10.512 Power State #0: 00:09:10.512 Max Power: 25.00 W 00:09:10.512 Non-Operational State: Operational 00:09:10.512 Entry Latency: 16 microseconds 00:09:10.512 Exit Latency: 4 microseconds 00:09:10.512 Relative Read Throughput: 0 00:09:10.512 Relative Read Latency: 0 00:09:10.512 Relative Write Throughput: 0 00:09:10.512 Relative Write Latency: 0 00:09:10.512 Idle Power: Not Reported 00:09:10.512 Active Power: Not Reported 00:09:10.512 Non-Operational Permissive Mode: Not Supported 00:09:10.512 00:09:10.512 Health Information 00:09:10.512 ================== 00:09:10.512 Critical Warnings: 00:09:10.512 Available Spare Space: OK 00:09:10.512 Temperature: OK 00:09:10.512 Device Reliability: OK 00:09:10.512 Read Only: No 00:09:10.512 Volatile Memory Backup: OK 00:09:10.512 Current Temperature: 323 Kelvin (50 Celsius) 00:09:10.512 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:10.512 Available Spare: 0% 00:09:10.512 Available Spare Threshold: 0% 00:09:10.512 Life Percentage Used: 0% 00:09:10.512 Data Units Read: 658 00:09:10.512 Data Units Written: 586 00:09:10.512 Host Read Commands: 33117 00:09:10.512 Host Write Commands: 32903 00:09:10.512 Controller Busy Time: 0 minutes 00:09:10.512 Power Cycles: 0 00:09:10.512 Power On Hours: 0 hours 00:09:10.512 Unsafe Shutdowns: 0 00:09:10.512 Unrecoverable Media Errors: 0 00:09:10.512 Lifetime Error Log Entries: 0 00:09:10.512 Warning Temperature Time: 0 minutes 00:09:10.512 Critical Temperature Time: 0 minutes 00:09:10.512 00:09:10.512 Number of Queues 00:09:10.512 ================ 00:09:10.512 Number of I/O Submission Queues: 64 00:09:10.512 Number of I/O Completion Queues: 64 00:09:10.512 00:09:10.512 ZNS Specific Controller Data 00:09:10.512 ============================ 00:09:10.512 Zone Append Size Limit: 0 00:09:10.512 00:09:10.512 00:09:10.512 Active Namespaces 00:09:10.512 ================= 00:09:10.512 Namespace ID:1 00:09:10.512 Error Recovery Timeout: Unlimited 00:09:10.512 Command Set Identifier: NVM (00h) 00:09:10.512 Deallocate: Supported 
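
Note: when comparing these per-controller dumps, extracting one field per BDF is easier than scanning the full output; a sketch over the four controllers exercised in this run (the loop and grep are illustrative, not part of the test script):

    for bdf in 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0; do
        echo "== $bdf =="
        sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
            -r "trtype:PCIe traddr:$bdf" -i 0 | grep -E 'Host (Read|Write) Commands'
    done
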
00:09:10.512 Deallocated/Unwritten Error: Supported 00:09:10.512 Deallocated Read Value: All 0x00 00:09:10.512 Deallocate in Write Zeroes: Not Supported 00:09:10.512 Deallocated Guard Field: 0xFFFF 00:09:10.512 Flush: Supported 00:09:10.512 Reservation: Not Supported 00:09:10.512 Metadata Transferred as: Separate Metadata Buffer 00:09:10.512 Namespace Sharing Capabilities: Private 00:09:10.512 Size (in LBAs): 1548666 (5GiB) 00:09:10.512 Capacity (in LBAs): 1548666 (5GiB) 00:09:10.512 Utilization (in LBAs): 1548666 (5GiB) 00:09:10.512 Thin Provisioning: Not Supported 00:09:10.512 Per-NS Atomic Units: No 00:09:10.512 Maximum Single Source Range Length: 128 00:09:10.512 Maximum Copy Length: 128 00:09:10.512 Maximum Source Range Count: 128 00:09:10.512 NGUID/EUI64 Never Reused: No 00:09:10.512 Namespace Write Protected: No 00:09:10.512 Number of LBA Formats: 8 00:09:10.512 Current LBA Format: LBA Format #07 00:09:10.512 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:10.512 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:10.512 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:10.512 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:10.512 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:10.512 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:10.512 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:10.512 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:10.512 00:09:10.512 NVM Specific Namespace Data 00:09:10.512 =========================== 00:09:10.512 Logical Block Storage Tag Mask: 0 00:09:10.512 Protection Information Capabilities: 00:09:10.512 16b Guard Protection Information Storage Tag Support: No 00:09:10.512 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:10.512 Storage Tag Check Read Support: No 00:09:10.513 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.513 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.513 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.513 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.513 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.513 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.513 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.513 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:10.513 17:59:26 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:10.513 17:59:26 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:09:10.770 ===================================================== 00:09:10.770 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:10.770 ===================================================== 00:09:10.770 Controller Capabilities/Features 00:09:10.770 ================================ 00:09:10.770 Vendor ID: 1b36 00:09:10.770 Subsystem Vendor ID: 1af4 00:09:10.770 Serial Number: 12341 00:09:10.770 Model Number: QEMU NVMe Ctrl 00:09:10.770 Firmware Version: 8.0.0 00:09:10.770 Recommended Arb Burst: 6 00:09:10.770 IEEE OUI Identifier: 00 54 52 00:09:10.770 Multi-path I/O 00:09:10.770 May have multiple subsystem ports: No 00:09:10.770 May have multiple 
controllers: No 00:09:10.770 Associated with SR-IOV VF: No 00:09:10.770 Max Data Transfer Size: 524288 00:09:10.770 Max Number of Namespaces: 256 00:09:10.770 Max Number of I/O Queues: 64 00:09:10.770 NVMe Specification Version (VS): 1.4 00:09:10.770 NVMe Specification Version (Identify): 1.4 00:09:10.770 Maximum Queue Entries: 2048 00:09:10.770 Contiguous Queues Required: Yes 00:09:10.770 Arbitration Mechanisms Supported 00:09:10.770 Weighted Round Robin: Not Supported 00:09:10.770 Vendor Specific: Not Supported 00:09:10.770 Reset Timeout: 7500 ms 00:09:10.770 Doorbell Stride: 4 bytes 00:09:10.770 NVM Subsystem Reset: Not Supported 00:09:10.770 Command Sets Supported 00:09:10.770 NVM Command Set: Supported 00:09:10.770 Boot Partition: Not Supported 00:09:10.770 Memory Page Size Minimum: 4096 bytes 00:09:10.770 Memory Page Size Maximum: 65536 bytes 00:09:10.770 Persistent Memory Region: Not Supported 00:09:10.770 Optional Asynchronous Events Supported 00:09:10.770 Namespace Attribute Notices: Supported 00:09:10.770 Firmware Activation Notices: Not Supported 00:09:10.770 ANA Change Notices: Not Supported 00:09:10.770 PLE Aggregate Log Change Notices: Not Supported 00:09:10.770 LBA Status Info Alert Notices: Not Supported 00:09:10.770 EGE Aggregate Log Change Notices: Not Supported 00:09:10.770 Normal NVM Subsystem Shutdown event: Not Supported 00:09:10.770 Zone Descriptor Change Notices: Not Supported 00:09:10.770 Discovery Log Change Notices: Not Supported 00:09:10.770 Controller Attributes 00:09:10.770 128-bit Host Identifier: Not Supported 00:09:10.770 Non-Operational Permissive Mode: Not Supported 00:09:10.770 NVM Sets: Not Supported 00:09:10.770 Read Recovery Levels: Not Supported 00:09:10.770 Endurance Groups: Not Supported 00:09:10.770 Predictable Latency Mode: Not Supported 00:09:10.770 Traffic Based Keep ALive: Not Supported 00:09:10.770 Namespace Granularity: Not Supported 00:09:10.770 SQ Associations: Not Supported 00:09:10.770 UUID List: Not Supported 00:09:10.770 Multi-Domain Subsystem: Not Supported 00:09:10.771 Fixed Capacity Management: Not Supported 00:09:10.771 Variable Capacity Management: Not Supported 00:09:10.771 Delete Endurance Group: Not Supported 00:09:10.771 Delete NVM Set: Not Supported 00:09:10.771 Extended LBA Formats Supported: Supported 00:09:10.771 Flexible Data Placement Supported: Not Supported 00:09:10.771 00:09:10.771 Controller Memory Buffer Support 00:09:10.771 ================================ 00:09:10.771 Supported: No 00:09:10.771 00:09:10.771 Persistent Memory Region Support 00:09:10.771 ================================ 00:09:10.771 Supported: No 00:09:10.771 00:09:10.771 Admin Command Set Attributes 00:09:10.771 ============================ 00:09:10.771 Security Send/Receive: Not Supported 00:09:10.771 Format NVM: Supported 00:09:10.771 Firmware Activate/Download: Not Supported 00:09:10.771 Namespace Management: Supported 00:09:10.771 Device Self-Test: Not Supported 00:09:10.771 Directives: Supported 00:09:10.771 NVMe-MI: Not Supported 00:09:10.771 Virtualization Management: Not Supported 00:09:10.771 Doorbell Buffer Config: Supported 00:09:10.771 Get LBA Status Capability: Not Supported 00:09:10.771 Command & Feature Lockdown Capability: Not Supported 00:09:10.771 Abort Command Limit: 4 00:09:10.771 Async Event Request Limit: 4 00:09:10.771 Number of Firmware Slots: N/A 00:09:10.771 Firmware Slot 1 Read-Only: N/A 00:09:10.771 Firmware Activation Without Reset: N/A 00:09:10.771 Multiple Update Detection Support: N/A 00:09:10.771 Firmware Update 
Granularity: No Information Provided 00:09:10.771 Per-Namespace SMART Log: Yes 00:09:10.771 Asymmetric Namespace Access Log Page: Not Supported 00:09:10.771 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:09:10.771 Command Effects Log Page: Supported 00:09:10.771 Get Log Page Extended Data: Supported 00:09:10.771 Telemetry Log Pages: Not Supported 00:09:10.771 Persistent Event Log Pages: Not Supported 00:09:10.771 Supported Log Pages Log Page: May Support 00:09:10.771 Commands Supported & Effects Log Page: Not Supported 00:09:10.771 Feature Identifiers & Effects Log Page:May Support 00:09:10.771 NVMe-MI Commands & Effects Log Page: May Support 00:09:10.771 Data Area 4 for Telemetry Log: Not Supported 00:09:10.771 Error Log Page Entries Supported: 1 00:09:10.771 Keep Alive: Not Supported 00:09:10.771 00:09:10.771 NVM Command Set Attributes 00:09:10.771 ========================== 00:09:10.771 Submission Queue Entry Size 00:09:10.771 Max: 64 00:09:10.771 Min: 64 00:09:10.771 Completion Queue Entry Size 00:09:10.771 Max: 16 00:09:10.771 Min: 16 00:09:10.771 Number of Namespaces: 256 00:09:10.771 Compare Command: Supported 00:09:10.771 Write Uncorrectable Command: Not Supported 00:09:10.771 Dataset Management Command: Supported 00:09:10.771 Write Zeroes Command: Supported 00:09:10.771 Set Features Save Field: Supported 00:09:10.771 Reservations: Not Supported 00:09:10.771 Timestamp: Supported 00:09:10.771 Copy: Supported 00:09:10.771 Volatile Write Cache: Present 00:09:10.771 Atomic Write Unit (Normal): 1 00:09:10.771 Atomic Write Unit (PFail): 1 00:09:10.771 Atomic Compare & Write Unit: 1 00:09:10.771 Fused Compare & Write: Not Supported 00:09:10.771 Scatter-Gather List 00:09:10.771 SGL Command Set: Supported 00:09:10.771 SGL Keyed: Not Supported 00:09:10.771 SGL Bit Bucket Descriptor: Not Supported 00:09:10.771 SGL Metadata Pointer: Not Supported 00:09:10.771 Oversized SGL: Not Supported 00:09:10.771 SGL Metadata Address: Not Supported 00:09:10.771 SGL Offset: Not Supported 00:09:10.771 Transport SGL Data Block: Not Supported 00:09:10.771 Replay Protected Memory Block: Not Supported 00:09:10.771 00:09:10.771 Firmware Slot Information 00:09:10.771 ========================= 00:09:10.771 Active slot: 1 00:09:10.771 Slot 1 Firmware Revision: 1.0 00:09:10.771 00:09:10.771 00:09:10.771 Commands Supported and Effects 00:09:10.771 ============================== 00:09:10.771 Admin Commands 00:09:10.771 -------------- 00:09:10.771 Delete I/O Submission Queue (00h): Supported 00:09:10.771 Create I/O Submission Queue (01h): Supported 00:09:10.771 Get Log Page (02h): Supported 00:09:10.771 Delete I/O Completion Queue (04h): Supported 00:09:10.771 Create I/O Completion Queue (05h): Supported 00:09:10.771 Identify (06h): Supported 00:09:10.771 Abort (08h): Supported 00:09:10.771 Set Features (09h): Supported 00:09:10.771 Get Features (0Ah): Supported 00:09:10.771 Asynchronous Event Request (0Ch): Supported 00:09:10.771 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:10.771 Directive Send (19h): Supported 00:09:10.771 Directive Receive (1Ah): Supported 00:09:10.771 Virtualization Management (1Ch): Supported 00:09:10.771 Doorbell Buffer Config (7Ch): Supported 00:09:10.771 Format NVM (80h): Supported LBA-Change 00:09:10.771 I/O Commands 00:09:10.771 ------------ 00:09:10.771 Flush (00h): Supported LBA-Change 00:09:10.771 Write (01h): Supported LBA-Change 00:09:10.771 Read (02h): Supported 00:09:10.771 Compare (05h): Supported 00:09:10.771 Write Zeroes (08h): Supported LBA-Change 00:09:10.771 
Dataset Management (09h): Supported LBA-Change 00:09:10.771 Unknown (0Ch): Supported 00:09:10.771 Unknown (12h): Supported 00:09:10.771 Copy (19h): Supported LBA-Change 00:09:10.771 Unknown (1Dh): Supported LBA-Change 00:09:10.771 00:09:10.771 Error Log 00:09:10.771 ========= 00:09:10.771 00:09:10.771 Arbitration 00:09:10.771 =========== 00:09:10.771 Arbitration Burst: no limit 00:09:10.771 00:09:10.771 Power Management 00:09:10.771 ================ 00:09:10.771 Number of Power States: 1 00:09:10.771 Current Power State: Power State #0 00:09:10.771 Power State #0: 00:09:10.771 Max Power: 25.00 W 00:09:10.771 Non-Operational State: Operational 00:09:10.771 Entry Latency: 16 microseconds 00:09:10.771 Exit Latency: 4 microseconds 00:09:10.771 Relative Read Throughput: 0 00:09:10.771 Relative Read Latency: 0 00:09:10.771 Relative Write Throughput: 0 00:09:10.771 Relative Write Latency: 0 00:09:11.029 Idle Power: Not Reported 00:09:11.029 Active Power: Not Reported 00:09:11.029 Non-Operational Permissive Mode: Not Supported 00:09:11.029 00:09:11.029 Health Information 00:09:11.029 ================== 00:09:11.029 Critical Warnings: 00:09:11.029 Available Spare Space: OK 00:09:11.029 Temperature: OK 00:09:11.029 Device Reliability: OK 00:09:11.029 Read Only: No 00:09:11.029 Volatile Memory Backup: OK 00:09:11.029 Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.029 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:11.029 Available Spare: 0% 00:09:11.029 Available Spare Threshold: 0% 00:09:11.029 Life Percentage Used: 0% 00:09:11.029 Data Units Read: 950 00:09:11.029 Data Units Written: 816 00:09:11.029 Host Read Commands: 48635 00:09:11.029 Host Write Commands: 47427 00:09:11.029 Controller Busy Time: 0 minutes 00:09:11.029 Power Cycles: 0 00:09:11.029 Power On Hours: 0 hours 00:09:11.029 Unsafe Shutdowns: 0 00:09:11.029 Unrecoverable Media Errors: 0 00:09:11.029 Lifetime Error Log Entries: 0 00:09:11.029 Warning Temperature Time: 0 minutes 00:09:11.029 Critical Temperature Time: 0 minutes 00:09:11.029 00:09:11.029 Number of Queues 00:09:11.029 ================ 00:09:11.029 Number of I/O Submission Queues: 64 00:09:11.029 Number of I/O Completion Queues: 64 00:09:11.029 00:09:11.029 ZNS Specific Controller Data 00:09:11.029 ============================ 00:09:11.029 Zone Append Size Limit: 0 00:09:11.029 00:09:11.029 00:09:11.029 Active Namespaces 00:09:11.029 ================= 00:09:11.029 Namespace ID:1 00:09:11.029 Error Recovery Timeout: Unlimited 00:09:11.029 Command Set Identifier: NVM (00h) 00:09:11.029 Deallocate: Supported 00:09:11.029 Deallocated/Unwritten Error: Supported 00:09:11.029 Deallocated Read Value: All 0x00 00:09:11.029 Deallocate in Write Zeroes: Not Supported 00:09:11.029 Deallocated Guard Field: 0xFFFF 00:09:11.029 Flush: Supported 00:09:11.029 Reservation: Not Supported 00:09:11.029 Namespace Sharing Capabilities: Private 00:09:11.029 Size (in LBAs): 1310720 (5GiB) 00:09:11.029 Capacity (in LBAs): 1310720 (5GiB) 00:09:11.029 Utilization (in LBAs): 1310720 (5GiB) 00:09:11.029 Thin Provisioning: Not Supported 00:09:11.029 Per-NS Atomic Units: No 00:09:11.029 Maximum Single Source Range Length: 128 00:09:11.029 Maximum Copy Length: 128 00:09:11.029 Maximum Source Range Count: 128 00:09:11.029 NGUID/EUI64 Never Reused: No 00:09:11.029 Namespace Write Protected: No 00:09:11.029 Number of LBA Formats: 8 00:09:11.029 Current LBA Format: LBA Format #04 00:09:11.029 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.029 LBA Format #01: Data Size: 512 Metadata Size: 8 
00:09:11.029 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.029 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.029 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.029 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.029 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.029 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.029 00:09:11.029 NVM Specific Namespace Data 00:09:11.029 =========================== 00:09:11.029 Logical Block Storage Tag Mask: 0 00:09:11.029 Protection Information Capabilities: 00:09:11.029 16b Guard Protection Information Storage Tag Support: No 00:09:11.029 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:11.029 Storage Tag Check Read Support: No 00:09:11.029 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.029 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.029 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.029 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.029 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.029 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.029 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.029 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.029 17:59:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:11.029 17:59:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:09:11.287 ===================================================== 00:09:11.287 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:11.287 ===================================================== 00:09:11.287 Controller Capabilities/Features 00:09:11.287 ================================ 00:09:11.287 Vendor ID: 1b36 00:09:11.287 Subsystem Vendor ID: 1af4 00:09:11.287 Serial Number: 12342 00:09:11.287 Model Number: QEMU NVMe Ctrl 00:09:11.287 Firmware Version: 8.0.0 00:09:11.287 Recommended Arb Burst: 6 00:09:11.287 IEEE OUI Identifier: 00 54 52 00:09:11.287 Multi-path I/O 00:09:11.287 May have multiple subsystem ports: No 00:09:11.287 May have multiple controllers: No 00:09:11.287 Associated with SR-IOV VF: No 00:09:11.287 Max Data Transfer Size: 524288 00:09:11.287 Max Number of Namespaces: 256 00:09:11.287 Max Number of I/O Queues: 64 00:09:11.288 NVMe Specification Version (VS): 1.4 00:09:11.288 NVMe Specification Version (Identify): 1.4 00:09:11.288 Maximum Queue Entries: 2048 00:09:11.288 Contiguous Queues Required: Yes 00:09:11.288 Arbitration Mechanisms Supported 00:09:11.288 Weighted Round Robin: Not Supported 00:09:11.288 Vendor Specific: Not Supported 00:09:11.288 Reset Timeout: 7500 ms 00:09:11.288 Doorbell Stride: 4 bytes 00:09:11.288 NVM Subsystem Reset: Not Supported 00:09:11.288 Command Sets Supported 00:09:11.288 NVM Command Set: Supported 00:09:11.288 Boot Partition: Not Supported 00:09:11.288 Memory Page Size Minimum: 4096 bytes 00:09:11.288 Memory Page Size Maximum: 65536 bytes 00:09:11.288 Persistent Memory Region: Not Supported 00:09:11.288 Optional Asynchronous Events Supported 00:09:11.288 Namespace Attribute Notices: Supported 00:09:11.288 Firmware 
Activation Notices: Not Supported 00:09:11.288 ANA Change Notices: Not Supported 00:09:11.288 PLE Aggregate Log Change Notices: Not Supported 00:09:11.288 LBA Status Info Alert Notices: Not Supported 00:09:11.288 EGE Aggregate Log Change Notices: Not Supported 00:09:11.288 Normal NVM Subsystem Shutdown event: Not Supported 00:09:11.288 Zone Descriptor Change Notices: Not Supported 00:09:11.288 Discovery Log Change Notices: Not Supported 00:09:11.288 Controller Attributes 00:09:11.288 128-bit Host Identifier: Not Supported 00:09:11.288 Non-Operational Permissive Mode: Not Supported 00:09:11.288 NVM Sets: Not Supported 00:09:11.288 Read Recovery Levels: Not Supported 00:09:11.288 Endurance Groups: Not Supported 00:09:11.288 Predictable Latency Mode: Not Supported 00:09:11.288 Traffic Based Keep ALive: Not Supported 00:09:11.288 Namespace Granularity: Not Supported 00:09:11.288 SQ Associations: Not Supported 00:09:11.288 UUID List: Not Supported 00:09:11.288 Multi-Domain Subsystem: Not Supported 00:09:11.288 Fixed Capacity Management: Not Supported 00:09:11.288 Variable Capacity Management: Not Supported 00:09:11.288 Delete Endurance Group: Not Supported 00:09:11.288 Delete NVM Set: Not Supported 00:09:11.288 Extended LBA Formats Supported: Supported 00:09:11.288 Flexible Data Placement Supported: Not Supported 00:09:11.288 00:09:11.288 Controller Memory Buffer Support 00:09:11.288 ================================ 00:09:11.288 Supported: No 00:09:11.288 00:09:11.288 Persistent Memory Region Support 00:09:11.288 ================================ 00:09:11.288 Supported: No 00:09:11.288 00:09:11.288 Admin Command Set Attributes 00:09:11.288 ============================ 00:09:11.288 Security Send/Receive: Not Supported 00:09:11.288 Format NVM: Supported 00:09:11.288 Firmware Activate/Download: Not Supported 00:09:11.288 Namespace Management: Supported 00:09:11.288 Device Self-Test: Not Supported 00:09:11.288 Directives: Supported 00:09:11.288 NVMe-MI: Not Supported 00:09:11.288 Virtualization Management: Not Supported 00:09:11.288 Doorbell Buffer Config: Supported 00:09:11.288 Get LBA Status Capability: Not Supported 00:09:11.288 Command & Feature Lockdown Capability: Not Supported 00:09:11.288 Abort Command Limit: 4 00:09:11.288 Async Event Request Limit: 4 00:09:11.288 Number of Firmware Slots: N/A 00:09:11.288 Firmware Slot 1 Read-Only: N/A 00:09:11.288 Firmware Activation Without Reset: N/A 00:09:11.288 Multiple Update Detection Support: N/A 00:09:11.288 Firmware Update Granularity: No Information Provided 00:09:11.288 Per-Namespace SMART Log: Yes 00:09:11.288 Asymmetric Namespace Access Log Page: Not Supported 00:09:11.288 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:09:11.288 Command Effects Log Page: Supported 00:09:11.288 Get Log Page Extended Data: Supported 00:09:11.288 Telemetry Log Pages: Not Supported 00:09:11.288 Persistent Event Log Pages: Not Supported 00:09:11.288 Supported Log Pages Log Page: May Support 00:09:11.288 Commands Supported & Effects Log Page: Not Supported 00:09:11.288 Feature Identifiers & Effects Log Page:May Support 00:09:11.288 NVMe-MI Commands & Effects Log Page: May Support 00:09:11.288 Data Area 4 for Telemetry Log: Not Supported 00:09:11.288 Error Log Page Entries Supported: 1 00:09:11.288 Keep Alive: Not Supported 00:09:11.288 00:09:11.288 NVM Command Set Attributes 00:09:11.288 ========================== 00:09:11.288 Submission Queue Entry Size 00:09:11.288 Max: 64 00:09:11.288 Min: 64 00:09:11.288 Completion Queue Entry Size 00:09:11.288 Max: 16 
00:09:11.288 Min: 16 00:09:11.288 Number of Namespaces: 256 00:09:11.288 Compare Command: Supported 00:09:11.288 Write Uncorrectable Command: Not Supported 00:09:11.288 Dataset Management Command: Supported 00:09:11.288 Write Zeroes Command: Supported 00:09:11.288 Set Features Save Field: Supported 00:09:11.288 Reservations: Not Supported 00:09:11.288 Timestamp: Supported 00:09:11.288 Copy: Supported 00:09:11.288 Volatile Write Cache: Present 00:09:11.288 Atomic Write Unit (Normal): 1 00:09:11.288 Atomic Write Unit (PFail): 1 00:09:11.288 Atomic Compare & Write Unit: 1 00:09:11.288 Fused Compare & Write: Not Supported 00:09:11.288 Scatter-Gather List 00:09:11.288 SGL Command Set: Supported 00:09:11.288 SGL Keyed: Not Supported 00:09:11.288 SGL Bit Bucket Descriptor: Not Supported 00:09:11.288 SGL Metadata Pointer: Not Supported 00:09:11.288 Oversized SGL: Not Supported 00:09:11.288 SGL Metadata Address: Not Supported 00:09:11.288 SGL Offset: Not Supported 00:09:11.288 Transport SGL Data Block: Not Supported 00:09:11.288 Replay Protected Memory Block: Not Supported 00:09:11.288 00:09:11.288 Firmware Slot Information 00:09:11.288 ========================= 00:09:11.288 Active slot: 1 00:09:11.288 Slot 1 Firmware Revision: 1.0 00:09:11.288 00:09:11.288 00:09:11.288 Commands Supported and Effects 00:09:11.288 ============================== 00:09:11.288 Admin Commands 00:09:11.288 -------------- 00:09:11.288 Delete I/O Submission Queue (00h): Supported 00:09:11.288 Create I/O Submission Queue (01h): Supported 00:09:11.288 Get Log Page (02h): Supported 00:09:11.288 Delete I/O Completion Queue (04h): Supported 00:09:11.288 Create I/O Completion Queue (05h): Supported 00:09:11.288 Identify (06h): Supported 00:09:11.288 Abort (08h): Supported 00:09:11.288 Set Features (09h): Supported 00:09:11.288 Get Features (0Ah): Supported 00:09:11.288 Asynchronous Event Request (0Ch): Supported 00:09:11.288 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:11.288 Directive Send (19h): Supported 00:09:11.288 Directive Receive (1Ah): Supported 00:09:11.288 Virtualization Management (1Ch): Supported 00:09:11.288 Doorbell Buffer Config (7Ch): Supported 00:09:11.288 Format NVM (80h): Supported LBA-Change 00:09:11.288 I/O Commands 00:09:11.288 ------------ 00:09:11.288 Flush (00h): Supported LBA-Change 00:09:11.288 Write (01h): Supported LBA-Change 00:09:11.288 Read (02h): Supported 00:09:11.288 Compare (05h): Supported 00:09:11.288 Write Zeroes (08h): Supported LBA-Change 00:09:11.288 Dataset Management (09h): Supported LBA-Change 00:09:11.288 Unknown (0Ch): Supported 00:09:11.288 Unknown (12h): Supported 00:09:11.288 Copy (19h): Supported LBA-Change 00:09:11.288 Unknown (1Dh): Supported LBA-Change 00:09:11.288 00:09:11.288 Error Log 00:09:11.288 ========= 00:09:11.288 00:09:11.288 Arbitration 00:09:11.288 =========== 00:09:11.288 Arbitration Burst: no limit 00:09:11.288 00:09:11.288 Power Management 00:09:11.288 ================ 00:09:11.288 Number of Power States: 1 00:09:11.288 Current Power State: Power State #0 00:09:11.288 Power State #0: 00:09:11.288 Max Power: 25.00 W 00:09:11.288 Non-Operational State: Operational 00:09:11.288 Entry Latency: 16 microseconds 00:09:11.288 Exit Latency: 4 microseconds 00:09:11.288 Relative Read Throughput: 0 00:09:11.288 Relative Read Latency: 0 00:09:11.288 Relative Write Throughput: 0 00:09:11.288 Relative Write Latency: 0 00:09:11.288 Idle Power: Not Reported 00:09:11.288 Active Power: Not Reported 00:09:11.288 Non-Operational Permissive Mode: Not Supported 
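
Note: this second 0000:00:12.0 dump, produced by the per-BDF loop, repeats the first pass above; the health counters that follow (Data Units Read: 2098, Host Read Commands: 101354) are identical to the earlier ones, so no I/O reached this controller between the two passes. Capturing and diffing two passes makes such a check mechanical (sketch; file names are illustrative):

    sudo /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 > /tmp/id-12-pass2.txt
    diff /tmp/id-12-pass1.txt /tmp/id-12-pass2.txt   # counters may legitimately drift
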
00:09:11.288 00:09:11.288 Health Information 00:09:11.288 ================== 00:09:11.288 Critical Warnings: 00:09:11.288 Available Spare Space: OK 00:09:11.288 Temperature: OK 00:09:11.288 Device Reliability: OK 00:09:11.288 Read Only: No 00:09:11.288 Volatile Memory Backup: OK 00:09:11.288 Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.288 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:11.288 Available Spare: 0% 00:09:11.288 Available Spare Threshold: 0% 00:09:11.288 Life Percentage Used: 0% 00:09:11.288 Data Units Read: 2098 00:09:11.288 Data Units Written: 1886 00:09:11.288 Host Read Commands: 101354 00:09:11.288 Host Write Commands: 99623 00:09:11.288 Controller Busy Time: 0 minutes 00:09:11.288 Power Cycles: 0 00:09:11.288 Power On Hours: 0 hours 00:09:11.288 Unsafe Shutdowns: 0 00:09:11.288 Unrecoverable Media Errors: 0 00:09:11.288 Lifetime Error Log Entries: 0 00:09:11.288 Warning Temperature Time: 0 minutes 00:09:11.288 Critical Temperature Time: 0 minutes 00:09:11.288 00:09:11.289 Number of Queues 00:09:11.289 ================ 00:09:11.289 Number of I/O Submission Queues: 64 00:09:11.289 Number of I/O Completion Queues: 64 00:09:11.289 00:09:11.289 ZNS Specific Controller Data 00:09:11.289 ============================ 00:09:11.289 Zone Append Size Limit: 0 00:09:11.289 00:09:11.289 00:09:11.289 Active Namespaces 00:09:11.289 ================= 00:09:11.289 Namespace ID:1 00:09:11.289 Error Recovery Timeout: Unlimited 00:09:11.289 Command Set Identifier: NVM (00h) 00:09:11.289 Deallocate: Supported 00:09:11.289 Deallocated/Unwritten Error: Supported 00:09:11.289 Deallocated Read Value: All 0x00 00:09:11.289 Deallocate in Write Zeroes: Not Supported 00:09:11.289 Deallocated Guard Field: 0xFFFF 00:09:11.289 Flush: Supported 00:09:11.289 Reservation: Not Supported 00:09:11.289 Namespace Sharing Capabilities: Private 00:09:11.289 Size (in LBAs): 1048576 (4GiB) 00:09:11.289 Capacity (in LBAs): 1048576 (4GiB) 00:09:11.289 Utilization (in LBAs): 1048576 (4GiB) 00:09:11.289 Thin Provisioning: Not Supported 00:09:11.289 Per-NS Atomic Units: No 00:09:11.289 Maximum Single Source Range Length: 128 00:09:11.289 Maximum Copy Length: 128 00:09:11.289 Maximum Source Range Count: 128 00:09:11.289 NGUID/EUI64 Never Reused: No 00:09:11.289 Namespace Write Protected: No 00:09:11.289 Number of LBA Formats: 8 00:09:11.289 Current LBA Format: LBA Format #04 00:09:11.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.289 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.289 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.289 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.289 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.289 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.289 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.289 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.289 00:09:11.289 NVM Specific Namespace Data 00:09:11.289 =========================== 00:09:11.289 Logical Block Storage Tag Mask: 0 00:09:11.289 Protection Information Capabilities: 00:09:11.289 16b Guard Protection Information Storage Tag Support: No 00:09:11.289 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:11.289 Storage Tag Check Read Support: No 00:09:11.289 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #02: Storage Tag 
Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Namespace ID:2 00:09:11.289 Error Recovery Timeout: Unlimited 00:09:11.289 Command Set Identifier: NVM (00h) 00:09:11.289 Deallocate: Supported 00:09:11.289 Deallocated/Unwritten Error: Supported 00:09:11.289 Deallocated Read Value: All 0x00 00:09:11.289 Deallocate in Write Zeroes: Not Supported 00:09:11.289 Deallocated Guard Field: 0xFFFF 00:09:11.289 Flush: Supported 00:09:11.289 Reservation: Not Supported 00:09:11.289 Namespace Sharing Capabilities: Private 00:09:11.289 Size (in LBAs): 1048576 (4GiB) 00:09:11.289 Capacity (in LBAs): 1048576 (4GiB) 00:09:11.289 Utilization (in LBAs): 1048576 (4GiB) 00:09:11.289 Thin Provisioning: Not Supported 00:09:11.289 Per-NS Atomic Units: No 00:09:11.289 Maximum Single Source Range Length: 128 00:09:11.289 Maximum Copy Length: 128 00:09:11.289 Maximum Source Range Count: 128 00:09:11.289 NGUID/EUI64 Never Reused: No 00:09:11.289 Namespace Write Protected: No 00:09:11.289 Number of LBA Formats: 8 00:09:11.289 Current LBA Format: LBA Format #04 00:09:11.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.289 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.289 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.289 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.289 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.289 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.289 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.289 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.289 00:09:11.289 NVM Specific Namespace Data 00:09:11.289 =========================== 00:09:11.289 Logical Block Storage Tag Mask: 0 00:09:11.289 Protection Information Capabilities: 00:09:11.289 16b Guard Protection Information Storage Tag Support: No 00:09:11.289 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:11.289 Storage Tag Check Read Support: No 00:09:11.289 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Namespace ID:3 00:09:11.289 Error Recovery Timeout: Unlimited 00:09:11.289 Command Set Identifier: NVM (00h) 00:09:11.289 Deallocate: Supported 00:09:11.289 Deallocated/Unwritten Error: Supported 00:09:11.289 Deallocated Read 
Value: All 0x00 00:09:11.289 Deallocate in Write Zeroes: Not Supported 00:09:11.289 Deallocated Guard Field: 0xFFFF 00:09:11.289 Flush: Supported 00:09:11.289 Reservation: Not Supported 00:09:11.289 Namespace Sharing Capabilities: Private 00:09:11.289 Size (in LBAs): 1048576 (4GiB) 00:09:11.289 Capacity (in LBAs): 1048576 (4GiB) 00:09:11.289 Utilization (in LBAs): 1048576 (4GiB) 00:09:11.289 Thin Provisioning: Not Supported 00:09:11.289 Per-NS Atomic Units: No 00:09:11.289 Maximum Single Source Range Length: 128 00:09:11.289 Maximum Copy Length: 128 00:09:11.289 Maximum Source Range Count: 128 00:09:11.289 NGUID/EUI64 Never Reused: No 00:09:11.289 Namespace Write Protected: No 00:09:11.289 Number of LBA Formats: 8 00:09:11.289 Current LBA Format: LBA Format #04 00:09:11.289 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.289 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.289 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.289 LBA Format #03: Data Size: 512 Metadata Size: 64 00:09:11.289 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.289 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.289 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.289 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.289 00:09:11.289 NVM Specific Namespace Data 00:09:11.289 =========================== 00:09:11.289 Logical Block Storage Tag Mask: 0 00:09:11.289 Protection Information Capabilities: 00:09:11.289 16b Guard Protection Information Storage Tag Support: No 00:09:11.289 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:11.289 Storage Tag Check Read Support: No 00:09:11.289 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.289 17:59:27 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:09:11.289 17:59:27 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:09:11.547 ===================================================== 00:09:11.547 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:11.547 ===================================================== 00:09:11.547 Controller Capabilities/Features 00:09:11.547 ================================ 00:09:11.547 Vendor ID: 1b36 00:09:11.547 Subsystem Vendor ID: 1af4 00:09:11.547 Serial Number: 12343 00:09:11.547 Model Number: QEMU NVMe Ctrl 00:09:11.547 Firmware Version: 8.0.0 00:09:11.547 Recommended Arb Burst: 6 00:09:11.547 IEEE OUI Identifier: 00 54 52 00:09:11.547 Multi-path I/O 00:09:11.547 May have multiple subsystem ports: No 00:09:11.547 May have multiple controllers: Yes 00:09:11.547 Associated with SR-IOV VF: No 00:09:11.547 Max Data Transfer Size: 524288 00:09:11.547 Max Number of Namespaces: 
256 00:09:11.547 Max Number of I/O Queues: 64 00:09:11.547 NVMe Specification Version (VS): 1.4 00:09:11.547 NVMe Specification Version (Identify): 1.4 00:09:11.547 Maximum Queue Entries: 2048 00:09:11.547 Contiguous Queues Required: Yes 00:09:11.547 Arbitration Mechanisms Supported 00:09:11.547 Weighted Round Robin: Not Supported 00:09:11.547 Vendor Specific: Not Supported 00:09:11.547 Reset Timeout: 7500 ms 00:09:11.547 Doorbell Stride: 4 bytes 00:09:11.547 NVM Subsystem Reset: Not Supported 00:09:11.547 Command Sets Supported 00:09:11.547 NVM Command Set: Supported 00:09:11.547 Boot Partition: Not Supported 00:09:11.547 Memory Page Size Minimum: 4096 bytes 00:09:11.547 Memory Page Size Maximum: 65536 bytes 00:09:11.547 Persistent Memory Region: Not Supported 00:09:11.547 Optional Asynchronous Events Supported 00:09:11.547 Namespace Attribute Notices: Supported 00:09:11.547 Firmware Activation Notices: Not Supported 00:09:11.547 ANA Change Notices: Not Supported 00:09:11.547 PLE Aggregate Log Change Notices: Not Supported 00:09:11.548 LBA Status Info Alert Notices: Not Supported 00:09:11.548 EGE Aggregate Log Change Notices: Not Supported 00:09:11.548 Normal NVM Subsystem Shutdown event: Not Supported 00:09:11.548 Zone Descriptor Change Notices: Not Supported 00:09:11.548 Discovery Log Change Notices: Not Supported 00:09:11.548 Controller Attributes 00:09:11.548 128-bit Host Identifier: Not Supported 00:09:11.548 Non-Operational Permissive Mode: Not Supported 00:09:11.548 NVM Sets: Not Supported 00:09:11.548 Read Recovery Levels: Not Supported 00:09:11.548 Endurance Groups: Supported 00:09:11.548 Predictable Latency Mode: Not Supported 00:09:11.548 Traffic Based Keep Alive: Not Supported 00:09:11.548 Namespace Granularity: Not Supported 00:09:11.548 SQ Associations: Not Supported 00:09:11.548 UUID List: Not Supported 00:09:11.548 Multi-Domain Subsystem: Not Supported 00:09:11.548 Fixed Capacity Management: Not Supported 00:09:11.548 Variable Capacity Management: Not Supported 00:09:11.548 Delete Endurance Group: Not Supported 00:09:11.548 Delete NVM Set: Not Supported 00:09:11.548 Extended LBA Formats Supported: Supported 00:09:11.548 Flexible Data Placement Supported: Supported 00:09:11.548 00:09:11.548 Controller Memory Buffer Support 00:09:11.548 ================================ 00:09:11.548 Supported: No 00:09:11.548 00:09:11.548 Persistent Memory Region Support 00:09:11.548 ================================ 00:09:11.548 Supported: No 00:09:11.548 00:09:11.548 Admin Command Set Attributes 00:09:11.548 ============================ 00:09:11.548 Security Send/Receive: Not Supported 00:09:11.548 Format NVM: Supported 00:09:11.548 Firmware Activate/Download: Not Supported 00:09:11.548 Namespace Management: Supported 00:09:11.548 Device Self-Test: Not Supported 00:09:11.548 Directives: Supported 00:09:11.548 NVMe-MI: Not Supported 00:09:11.548 Virtualization Management: Not Supported 00:09:11.548 Doorbell Buffer Config: Supported 00:09:11.548 Get LBA Status Capability: Not Supported 00:09:11.548 Command & Feature Lockdown Capability: Not Supported 00:09:11.548 Abort Command Limit: 4 00:09:11.548 Async Event Request Limit: 4 00:09:11.548 Number of Firmware Slots: N/A 00:09:11.548 Firmware Slot 1 Read-Only: N/A 00:09:11.548 Firmware Activation Without Reset: N/A 00:09:11.548 Multiple Update Detection Support: N/A 00:09:11.548 Firmware Update Granularity: No Information Provided 00:09:11.548 Per-Namespace SMART Log: Yes 00:09:11.548 Asymmetric Namespace Access Log Page: Not Supported
00:09:11.548 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:09:11.548 Command Effects Log Page: Supported 00:09:11.548 Get Log Page Extended Data: Supported 00:09:11.548 Telemetry Log Pages: Not Supported 00:09:11.548 Persistent Event Log Pages: Not Supported 00:09:11.548 Supported Log Pages Log Page: May Support 00:09:11.548 Commands Supported & Effects Log Page: Not Supported 00:09:11.548 Feature Identifiers & Effects Log Page: May Support 00:09:11.548 NVMe-MI Commands & Effects Log Page: May Support 00:09:11.548 Data Area 4 for Telemetry Log: Not Supported 00:09:11.548 Error Log Page Entries Supported: 1 00:09:11.548 Keep Alive: Not Supported 00:09:11.548 00:09:11.548 NVM Command Set Attributes 00:09:11.548 ========================== 00:09:11.548 Submission Queue Entry Size 00:09:11.548 Max: 64 00:09:11.548 Min: 64 00:09:11.548 Completion Queue Entry Size 00:09:11.548 Max: 16 00:09:11.548 Min: 16 00:09:11.548 Number of Namespaces: 256 00:09:11.548 Compare Command: Supported 00:09:11.548 Write Uncorrectable Command: Not Supported 00:09:11.548 Dataset Management Command: Supported 00:09:11.548 Write Zeroes Command: Supported 00:09:11.548 Set Features Save Field: Supported 00:09:11.548 Reservations: Not Supported 00:09:11.548 Timestamp: Supported 00:09:11.548 Copy: Supported 00:09:11.548 Volatile Write Cache: Present 00:09:11.548 Atomic Write Unit (Normal): 1 00:09:11.548 Atomic Write Unit (PFail): 1 00:09:11.548 Atomic Compare & Write Unit: 1 00:09:11.548 Fused Compare & Write: Not Supported 00:09:11.548 Scatter-Gather List 00:09:11.548 SGL Command Set: Supported 00:09:11.548 SGL Keyed: Not Supported 00:09:11.548 SGL Bit Bucket Descriptor: Not Supported 00:09:11.548 SGL Metadata Pointer: Not Supported 00:09:11.548 Oversized SGL: Not Supported 00:09:11.548 SGL Metadata Address: Not Supported 00:09:11.548 SGL Offset: Not Supported 00:09:11.548 Transport SGL Data Block: Not Supported 00:09:11.548 Replay Protected Memory Block: Not Supported 00:09:11.548 00:09:11.548 Firmware Slot Information 00:09:11.548 ========================= 00:09:11.548 Active slot: 1 00:09:11.548 Slot 1 Firmware Revision: 1.0 00:09:11.548 00:09:11.548 00:09:11.548 Commands Supported and Effects 00:09:11.548 ============================== 00:09:11.548 Admin Commands 00:09:11.548 -------------- 00:09:11.548 Delete I/O Submission Queue (00h): Supported 00:09:11.548 Create I/O Submission Queue (01h): Supported 00:09:11.548 Get Log Page (02h): Supported 00:09:11.548 Delete I/O Completion Queue (04h): Supported 00:09:11.548 Create I/O Completion Queue (05h): Supported 00:09:11.548 Identify (06h): Supported 00:09:11.548 Abort (08h): Supported 00:09:11.548 Set Features (09h): Supported 00:09:11.548 Get Features (0Ah): Supported 00:09:11.548 Asynchronous Event Request (0Ch): Supported 00:09:11.548 Namespace Attachment (15h): Supported NS-Inventory-Change 00:09:11.548 Directive Send (19h): Supported 00:09:11.548 Directive Receive (1Ah): Supported 00:09:11.548 Virtualization Management (1Ch): Supported 00:09:11.548 Doorbell Buffer Config (7Ch): Supported 00:09:11.548 Format NVM (80h): Supported LBA-Change 00:09:11.548 I/O Commands 00:09:11.548 ------------ 00:09:11.548 Flush (00h): Supported LBA-Change 00:09:11.548 Write (01h): Supported LBA-Change 00:09:11.548 Read (02h): Supported 00:09:11.548 Compare (05h): Supported 00:09:11.548 Write Zeroes (08h): Supported LBA-Change 00:09:11.548 Dataset Management (09h): Supported LBA-Change 00:09:11.548 Unknown (0Ch): Supported 00:09:11.548 Unknown (12h): Supported 00:09:11.548 Copy
(19h): Supported LBA-Change 00:09:11.548 Unknown (1Dh): Supported LBA-Change 00:09:11.548 00:09:11.548 Error Log 00:09:11.548 ========= 00:09:11.548 00:09:11.548 Arbitration 00:09:11.548 =========== 00:09:11.548 Arbitration Burst: no limit 00:09:11.548 00:09:11.548 Power Management 00:09:11.548 ================ 00:09:11.548 Number of Power States: 1 00:09:11.548 Current Power State: Power State #0 00:09:11.548 Power State #0: 00:09:11.548 Max Power: 25.00 W 00:09:11.548 Non-Operational State: Operational 00:09:11.548 Entry Latency: 16 microseconds 00:09:11.548 Exit Latency: 4 microseconds 00:09:11.548 Relative Read Throughput: 0 00:09:11.548 Relative Read Latency: 0 00:09:11.548 Relative Write Throughput: 0 00:09:11.549 Relative Write Latency: 0 00:09:11.549 Idle Power: Not Reported 00:09:11.549 Active Power: Not Reported 00:09:11.549 Non-Operational Permissive Mode: Not Supported 00:09:11.549 00:09:11.549 Health Information 00:09:11.549 ================== 00:09:11.549 Critical Warnings: 00:09:11.549 Available Spare Space: OK 00:09:11.549 Temperature: OK 00:09:11.549 Device Reliability: OK 00:09:11.549 Read Only: No 00:09:11.549 Volatile Memory Backup: OK 00:09:11.549 Current Temperature: 323 Kelvin (50 Celsius) 00:09:11.549 Temperature Threshold: 343 Kelvin (70 Celsius) 00:09:11.549 Available Spare: 0% 00:09:11.549 Available Spare Threshold: 0% 00:09:11.549 Life Percentage Used: 0% 00:09:11.549 Data Units Read: 790 00:09:11.549 Data Units Written: 719 00:09:11.549 Host Read Commands: 34626 00:09:11.549 Host Write Commands: 34049 00:09:11.549 Controller Busy Time: 0 minutes 00:09:11.549 Power Cycles: 0 00:09:11.549 Power On Hours: 0 hours 00:09:11.549 Unsafe Shutdowns: 0 00:09:11.549 Unrecoverable Media Errors: 0 00:09:11.549 Lifetime Error Log Entries: 0 00:09:11.549 Warning Temperature Time: 0 minutes 00:09:11.549 Critical Temperature Time: 0 minutes 00:09:11.549 00:09:11.549 Number of Queues 00:09:11.549 ================ 00:09:11.549 Number of I/O Submission Queues: 64 00:09:11.549 Number of I/O Completion Queues: 64 00:09:11.549 00:09:11.549 ZNS Specific Controller Data 00:09:11.549 ============================ 00:09:11.549 Zone Append Size Limit: 0 00:09:11.549 00:09:11.549 00:09:11.549 Active Namespaces 00:09:11.549 ================= 00:09:11.549 Namespace ID:1 00:09:11.549 Error Recovery Timeout: Unlimited 00:09:11.549 Command Set Identifier: NVM (00h) 00:09:11.549 Deallocate: Supported 00:09:11.549 Deallocated/Unwritten Error: Supported 00:09:11.549 Deallocated Read Value: All 0x00 00:09:11.549 Deallocate in Write Zeroes: Not Supported 00:09:11.549 Deallocated Guard Field: 0xFFFF 00:09:11.549 Flush: Supported 00:09:11.549 Reservation: Not Supported 00:09:11.549 Namespace Sharing Capabilities: Multiple Controllers 00:09:11.549 Size (in LBAs): 262144 (1GiB) 00:09:11.549 Capacity (in LBAs): 262144 (1GiB) 00:09:11.549 Utilization (in LBAs): 262144 (1GiB) 00:09:11.549 Thin Provisioning: Not Supported 00:09:11.549 Per-NS Atomic Units: No 00:09:11.549 Maximum Single Source Range Length: 128 00:09:11.549 Maximum Copy Length: 128 00:09:11.549 Maximum Source Range Count: 128 00:09:11.549 NGUID/EUI64 Never Reused: No 00:09:11.549 Namespace Write Protected: No 00:09:11.549 Endurance group ID: 1 00:09:11.549 Number of LBA Formats: 8 00:09:11.549 Current LBA Format: LBA Format #04 00:09:11.549 LBA Format #00: Data Size: 512 Metadata Size: 0 00:09:11.549 LBA Format #01: Data Size: 512 Metadata Size: 8 00:09:11.549 LBA Format #02: Data Size: 512 Metadata Size: 16 00:09:11.549 LBA Format #03: Data 
Size: 512 Metadata Size: 64 00:09:11.549 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:09:11.549 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:09:11.549 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:09:11.549 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:09:11.549 00:09:11.549 Get Feature FDP: 00:09:11.549 ================ 00:09:11.549 Enabled: Yes 00:09:11.549 FDP configuration index: 0 00:09:11.549 00:09:11.549 FDP configurations log page 00:09:11.549 =========================== 00:09:11.549 Number of FDP configurations: 1 00:09:11.549 Version: 0 00:09:11.549 Size: 112 00:09:11.549 FDP Configuration Descriptor: 0 00:09:11.549 Descriptor Size: 96 00:09:11.549 Reclaim Group Identifier format: 2 00:09:11.549 FDP Volatile Write Cache: Not Present 00:09:11.549 FDP Configuration: Valid 00:09:11.549 Vendor Specific Size: 0 00:09:11.549 Number of Reclaim Groups: 2 00:09:11.549 Number of Reclaim Unit Handles: 8 00:09:11.549 Max Placement Identifiers: 128 00:09:11.549 Number of Namespaces Supported: 256 00:09:11.549 Reclaim Unit Nominal Size: 6000000 bytes 00:09:11.549 Estimated Reclaim Unit Time Limit: Not Reported 00:09:11.549 RUH Desc #000: RUH Type: Initially Isolated 00:09:11.549 RUH Desc #001: RUH Type: Initially Isolated 00:09:11.549 RUH Desc #002: RUH Type: Initially Isolated 00:09:11.549 RUH Desc #003: RUH Type: Initially Isolated 00:09:11.549 RUH Desc #004: RUH Type: Initially Isolated 00:09:11.549 RUH Desc #005: RUH Type: Initially Isolated 00:09:11.549 RUH Desc #006: RUH Type: Initially Isolated 00:09:11.549 RUH Desc #007: RUH Type: Initially Isolated 00:09:11.549 00:09:11.549 FDP reclaim unit handle usage log page 00:09:11.549 ====================================== 00:09:11.549 Number of Reclaim Unit Handles: 8 00:09:11.549 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:11.549 RUH Usage Desc #001: RUH Attributes: Unused 00:09:11.549 RUH Usage Desc #002: RUH Attributes: Unused 00:09:11.549 RUH Usage Desc #003: RUH Attributes: Unused 00:09:11.549 RUH Usage Desc #004: RUH Attributes: Unused 00:09:11.549 RUH Usage Desc #005: RUH Attributes: Unused 00:09:11.549 RUH Usage Desc #006: RUH Attributes: Unused 00:09:11.549 RUH Usage Desc #007: RUH Attributes: Unused 00:09:11.549 00:09:11.549 FDP statistics log page 00:09:11.549 ======================= 00:09:11.549 Host bytes with metadata written: 445030400 00:09:11.549 Media bytes with metadata written: 445095936 00:09:11.549 Media bytes erased: 0 00:09:11.549 00:09:11.549 FDP events log page 00:09:11.549 =================== 00:09:11.549 Number of FDP events: 0 00:09:11.549 00:09:11.549 NVM Specific Namespace Data 00:09:11.549 =========================== 00:09:11.549 Logical Block Storage Tag Mask: 0 00:09:11.549 Protection Information Capabilities: 00:09:11.549 16b Guard Protection Information Storage Tag Support: No 00:09:11.549 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:09:11.549 Storage Tag Check Read Support: No 00:09:11.549 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.549 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.549 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.549 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.549 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.549 Extended LBA Format #05:
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.549 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.549 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:09:11.549 ************************************ 00:09:11.549 END TEST nvme_identify 00:09:11.549 ************************************ 00:09:11.549 00:09:11.549 real 0m1.834s 00:09:11.549 user 0m0.739s 00:09:11.549 sys 0m0.856s 00:09:11.549 17:59:27 nvme.nvme_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:11.549 17:59:27 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:09:11.806 17:59:28 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:09:11.806 17:59:28 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:11.806 17:59:28 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:11.806 17:59:28 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:11.806 ************************************ 00:09:11.806 START TEST nvme_perf 00:09:11.806 ************************************ 00:09:11.806 17:59:28 nvme.nvme_perf -- common/autotest_common.sh@1127 -- # nvme_perf 00:09:11.806 17:59:28 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:09:13.182 Initializing NVMe Controllers 00:09:13.182 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:09:13.182 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:09:13.182 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:09:13.182 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:09:13.182 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:09:13.182 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:09:13.182 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:09:13.182 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:09:13.182 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:09:13.182 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:09:13.182 Initialization complete. Launching workers. 
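The controller dump that "END TEST nvme_identify" closes out above comes from SPDK's spdk_nvme_identify utility, which the harness runs once per PCI address in the bdfs loop. A minimal sketch of reproducing a single pass by hand, assuming the same checkout path and PCI address as this run (both are machine-specific) and that the device has already been bound to a userspace driver:

    cd /home/vagrant/spdk_repo/spdk
    sudo scripts/setup.sh
    sudo build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0

Here -r takes the transport ID string (the PCIe transport plus the traddr of the controller to probe) and -i the shared-memory group ID, set to 0 to match the harness so the tool can attach alongside other SPDK processes in the same group.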
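The nvme_perf stage launching here exercises the same controllers with spdk_nvme_perf. A sketch of the equivalent standalone invocation, reusing the flags from the command above (same path assumption as before): -q 128 sets the queue depth, -w read requests sequential reads, -o 12288 issues 12 KiB I/Os, -t 1 runs for one second, -L doubled (-LL) asks for the per-device latency summaries plus the detailed histograms printed below, -i 0 is again the shared-memory group ID, and -N is kept verbatim from the harness command line:

    sudo build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N

With no -r transport filters given, perf probes and attaches every locally bound NVMe controller, which is why namespaces from all four QEMU controllers appear in the results that follow.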
00:09:13.182 ======================================================== 00:09:13.182 Latency(us) 00:09:13.182 Device Information : IOPS MiB/s Average min max 00:09:13.182 PCIE (0000:00:10.0) NSID 1 from core 0: 11758.97 137.80 10921.84 7803.69 37708.60 00:09:13.182 PCIE (0000:00:11.0) NSID 1 from core 0: 11758.97 137.80 10899.47 7913.90 35054.03 00:09:13.182 PCIE (0000:00:13.0) NSID 1 from core 0: 11758.97 137.80 10876.04 7911.71 32994.11 00:09:13.182 PCIE (0000:00:12.0) NSID 1 from core 0: 11758.97 137.80 10851.96 7926.64 30347.37 00:09:13.182 PCIE (0000:00:12.0) NSID 2 from core 0: 11758.97 137.80 10828.45 7916.74 27770.82 00:09:13.182 PCIE (0000:00:12.0) NSID 3 from core 0: 11758.97 137.80 10804.65 7930.99 25194.04 00:09:13.182 ======================================================== 00:09:13.182 Total : 70553.84 826.80 10863.73 7803.69 37708.60 00:09:13.182 00:09:13.182 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:13.182 ================================================================================= 00:09:13.182 1.00000% : 8281.367us 00:09:13.182 10.00000% : 8817.571us 00:09:13.182 25.00000% : 9175.040us 00:09:13.182 50.00000% : 9711.244us 00:09:13.182 75.00000% : 10843.229us 00:09:13.182 90.00000% : 13643.404us 00:09:13.182 95.00000% : 20614.051us 00:09:13.182 98.00000% : 22520.553us 00:09:13.182 99.00000% : 26691.025us 00:09:13.182 99.50000% : 35270.284us 00:09:13.182 99.90000% : 37415.098us 00:09:13.182 99.99000% : 37653.411us 00:09:13.182 99.99900% : 37891.724us 00:09:13.182 99.99990% : 37891.724us 00:09:13.182 99.99999% : 37891.724us 00:09:13.182 00:09:13.182 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:13.182 ================================================================================= 00:09:13.182 1.00000% : 8340.945us 00:09:13.182 10.00000% : 8877.149us 00:09:13.182 25.00000% : 9175.040us 00:09:13.182 50.00000% : 9711.244us 00:09:13.182 75.00000% : 10843.229us 00:09:13.182 90.00000% : 13583.825us 00:09:13.182 95.00000% : 20494.895us 00:09:13.182 98.00000% : 22163.084us 00:09:13.182 99.00000% : 25261.149us 00:09:13.182 99.50000% : 32887.156us 00:09:13.182 99.90000% : 34793.658us 00:09:13.182 99.99000% : 35031.971us 00:09:13.182 99.99900% : 35270.284us 00:09:13.182 99.99990% : 35270.284us 00:09:13.182 99.99999% : 35270.284us 00:09:13.182 00:09:13.182 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:13.182 ================================================================================= 00:09:13.182 1.00000% : 8340.945us 00:09:13.182 10.00000% : 8877.149us 00:09:13.182 25.00000% : 9175.040us 00:09:13.182 50.00000% : 9651.665us 00:09:13.182 75.00000% : 10843.229us 00:09:13.182 90.00000% : 13583.825us 00:09:13.182 95.00000% : 20614.051us 00:09:13.182 98.00000% : 22282.240us 00:09:13.182 99.00000% : 23235.491us 00:09:13.182 99.50000% : 30742.342us 00:09:13.182 99.90000% : 32648.844us 00:09:13.182 99.99000% : 33125.469us 00:09:13.182 99.99900% : 33125.469us 00:09:13.182 99.99990% : 33125.469us 00:09:13.182 99.99999% : 33125.469us 00:09:13.182 00:09:13.182 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:13.182 ================================================================================= 00:09:13.182 1.00000% : 8340.945us 00:09:13.182 10.00000% : 8877.149us 00:09:13.182 25.00000% : 9175.040us 00:09:13.182 50.00000% : 9651.665us 00:09:13.182 75.00000% : 10843.229us 00:09:13.182 90.00000% : 13762.560us 00:09:13.182 95.00000% : 20614.051us 00:09:13.182 98.00000% : 22043.927us 
00:09:13.182 99.00000% : 22639.709us 00:09:13.182 99.50000% : 28120.902us 00:09:13.182 99.90000% : 29908.247us 00:09:13.182 99.99000% : 30384.873us 00:09:13.182 99.99900% : 30384.873us 00:09:13.182 99.99990% : 30384.873us 00:09:13.182 99.99999% : 30384.873us 00:09:13.182 00:09:13.182 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:09:13.182 ================================================================================= 00:09:13.182 1.00000% : 8281.367us 00:09:13.182 10.00000% : 8877.149us 00:09:13.182 25.00000% : 9175.040us 00:09:13.182 50.00000% : 9651.665us 00:09:13.182 75.00000% : 10843.229us 00:09:13.182 90.00000% : 13822.138us 00:09:13.182 95.00000% : 20375.738us 00:09:13.182 98.00000% : 22043.927us 00:09:13.182 99.00000% : 22639.709us 00:09:13.182 99.50000% : 25499.462us 00:09:13.182 99.90000% : 27405.964us 00:09:13.182 99.99000% : 27763.433us 00:09:13.182 99.99900% : 27882.589us 00:09:13.182 99.99990% : 27882.589us 00:09:13.183 99.99999% : 27882.589us 00:09:13.183 00:09:13.183 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:09:13.183 ================================================================================= 00:09:13.183 1.00000% : 8340.945us 00:09:13.183 10.00000% : 8877.149us 00:09:13.183 25.00000% : 9175.040us 00:09:13.183 50.00000% : 9651.665us 00:09:13.183 75.00000% : 10843.229us 00:09:13.183 90.00000% : 13881.716us 00:09:13.183 95.00000% : 20375.738us 00:09:13.183 98.00000% : 22043.927us 00:09:13.183 99.00000% : 22639.709us 00:09:13.183 99.50000% : 23116.335us 00:09:13.183 99.90000% : 24784.524us 00:09:13.183 99.99000% : 25261.149us 00:09:13.183 99.99900% : 25261.149us 00:09:13.183 99.99990% : 25261.149us 00:09:13.183 99.99999% : 25261.149us 00:09:13.183 00:09:13.183 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:09:13.183 ============================================================================== 00:09:13.183 Range in us Cumulative IO count 00:09:13.183 7745.164 - 7804.742: 0.0085% ( 1) 00:09:13.183 7804.742 - 7864.320: 0.0849% ( 9) 00:09:13.183 7864.320 - 7923.898: 0.2293% ( 17) 00:09:13.183 7923.898 - 7983.476: 0.3312% ( 12) 00:09:13.183 7983.476 - 8043.055: 0.4925% ( 19) 00:09:13.183 8043.055 - 8102.633: 0.6284% ( 16) 00:09:13.183 8102.633 - 8162.211: 0.7473% ( 14) 00:09:13.183 8162.211 - 8221.789: 0.9766% ( 27) 00:09:13.183 8221.789 - 8281.367: 1.2398% ( 31) 00:09:13.183 8281.367 - 8340.945: 1.5880% ( 41) 00:09:13.183 8340.945 - 8400.524: 1.9361% ( 41) 00:09:13.183 8400.524 - 8460.102: 2.5221% ( 69) 00:09:13.183 8460.102 - 8519.680: 3.1844% ( 78) 00:09:13.183 8519.680 - 8579.258: 4.1101% ( 109) 00:09:13.183 8579.258 - 8638.836: 5.3329% ( 144) 00:09:13.183 8638.836 - 8698.415: 6.8444% ( 178) 00:09:13.183 8698.415 - 8757.993: 8.5598% ( 202) 00:09:13.183 8757.993 - 8817.571: 10.6658% ( 248) 00:09:13.183 8817.571 - 8877.149: 13.0435% ( 280) 00:09:13.183 8877.149 - 8936.727: 15.6335% ( 305) 00:09:13.183 8936.727 - 8996.305: 18.3084% ( 315) 00:09:13.183 8996.305 - 9055.884: 20.9579% ( 312) 00:09:13.183 9055.884 - 9115.462: 23.7262% ( 326) 00:09:13.183 9115.462 - 9175.040: 26.3077% ( 304) 00:09:13.183 9175.040 - 9234.618: 29.0166% ( 319) 00:09:13.183 9234.618 - 9294.196: 31.7171% ( 318) 00:09:13.183 9294.196 - 9353.775: 34.4260% ( 319) 00:09:13.183 9353.775 - 9413.353: 37.0329% ( 307) 00:09:13.183 9413.353 - 9472.931: 39.7843% ( 324) 00:09:13.183 9472.931 - 9532.509: 42.7055% ( 344) 00:09:13.183 9532.509 - 9592.087: 45.3974% ( 317) 00:09:13.183 9592.087 - 9651.665: 48.0044% ( 307) 00:09:13.183 9651.665 - 
9711.244: 50.6454% ( 311) 00:09:13.183 9711.244 - 9770.822: 53.1675% ( 297) 00:09:13.183 9770.822 - 9830.400: 55.5452% ( 280) 00:09:13.183 9830.400 - 9889.978: 57.7106% ( 255) 00:09:13.183 9889.978 - 9949.556: 59.6467% ( 228) 00:09:13.183 9949.556 - 10009.135: 61.4046% ( 207) 00:09:13.183 10009.135 - 10068.713: 62.9925% ( 187) 00:09:13.183 10068.713 - 10128.291: 64.4701% ( 174) 00:09:13.183 10128.291 - 10187.869: 65.5656% ( 129) 00:09:13.183 10187.869 - 10247.447: 66.5336% ( 114) 00:09:13.183 10247.447 - 10307.025: 67.4507% ( 108) 00:09:13.183 10307.025 - 10366.604: 68.3679% ( 108) 00:09:13.183 10366.604 - 10426.182: 69.3444% ( 115) 00:09:13.183 10426.182 - 10485.760: 70.2361% ( 105) 00:09:13.183 10485.760 - 10545.338: 71.1022% ( 102) 00:09:13.183 10545.338 - 10604.916: 71.9599% ( 101) 00:09:13.183 10604.916 - 10664.495: 72.8006% ( 99) 00:09:13.183 10664.495 - 10724.073: 73.6838% ( 104) 00:09:13.183 10724.073 - 10783.651: 74.4480% ( 90) 00:09:13.183 10783.651 - 10843.229: 75.2038% ( 89) 00:09:13.183 10843.229 - 10902.807: 75.9086% ( 83) 00:09:13.183 10902.807 - 10962.385: 76.5880% ( 80) 00:09:13.183 10962.385 - 11021.964: 77.4032% ( 96) 00:09:13.183 11021.964 - 11081.542: 78.1675% ( 90) 00:09:13.183 11081.542 - 11141.120: 78.9062% ( 87) 00:09:13.183 11141.120 - 11200.698: 79.6111% ( 83) 00:09:13.183 11200.698 - 11260.276: 80.2819% ( 79) 00:09:13.183 11260.276 - 11319.855: 80.8509% ( 67) 00:09:13.183 11319.855 - 11379.433: 81.4793% ( 74) 00:09:13.183 11379.433 - 11439.011: 82.0652% ( 69) 00:09:13.183 11439.011 - 11498.589: 82.5662% ( 59) 00:09:13.183 11498.589 - 11558.167: 83.0927% ( 62) 00:09:13.183 11558.167 - 11617.745: 83.5173% ( 50) 00:09:13.183 11617.745 - 11677.324: 83.9249% ( 48) 00:09:13.183 11677.324 - 11736.902: 84.2306% ( 36) 00:09:13.183 11736.902 - 11796.480: 84.4939% ( 31) 00:09:13.183 11796.480 - 11856.058: 84.7401% ( 29) 00:09:13.183 11856.058 - 11915.636: 84.9949% ( 30) 00:09:13.183 11915.636 - 11975.215: 85.2497% ( 30) 00:09:13.183 11975.215 - 12034.793: 85.5044% ( 30) 00:09:13.183 12034.793 - 12094.371: 85.7422% ( 28) 00:09:13.183 12094.371 - 12153.949: 85.9205% ( 21) 00:09:13.183 12153.949 - 12213.527: 86.1498% ( 27) 00:09:13.183 12213.527 - 12273.105: 86.3791% ( 27) 00:09:13.183 12273.105 - 12332.684: 86.5234% ( 17) 00:09:13.183 12332.684 - 12392.262: 86.6678% ( 17) 00:09:13.183 12392.262 - 12451.840: 86.7952% ( 15) 00:09:13.183 12451.840 - 12511.418: 86.9310% ( 16) 00:09:13.183 12511.418 - 12570.996: 87.0414% ( 13) 00:09:13.183 12570.996 - 12630.575: 87.1688% ( 15) 00:09:13.183 12630.575 - 12690.153: 87.3217% ( 18) 00:09:13.183 12690.153 - 12749.731: 87.4321% ( 13) 00:09:13.183 12749.731 - 12809.309: 87.5849% ( 18) 00:09:13.183 12809.309 - 12868.887: 87.7378% ( 18) 00:09:13.183 12868.887 - 12928.465: 87.8651% ( 15) 00:09:13.183 12928.465 - 12988.044: 88.0265% ( 19) 00:09:13.183 12988.044 - 13047.622: 88.1793% ( 18) 00:09:13.183 13047.622 - 13107.200: 88.3577% ( 21) 00:09:13.183 13107.200 - 13166.778: 88.5615% ( 24) 00:09:13.183 13166.778 - 13226.356: 88.7823% ( 26) 00:09:13.183 13226.356 - 13285.935: 89.0031% ( 26) 00:09:13.183 13285.935 - 13345.513: 89.1984% ( 23) 00:09:13.183 13345.513 - 13405.091: 89.3767% ( 21) 00:09:13.183 13405.091 - 13464.669: 89.5720% ( 23) 00:09:13.183 13464.669 - 13524.247: 89.7673% ( 23) 00:09:13.183 13524.247 - 13583.825: 89.9457% ( 21) 00:09:13.183 13583.825 - 13643.404: 90.1325% ( 22) 00:09:13.183 13643.404 - 13702.982: 90.2938% ( 19) 00:09:13.183 13702.982 - 13762.560: 90.4382% ( 17) 00:09:13.183 13762.560 - 13822.138: 90.5910% ( 18) 
00:09:13.183 13822.138 - 13881.716: 90.7014% ( 13) 00:09:13.183 13881.716 - 13941.295: 90.7524% ( 6) 00:09:13.183 13941.295 - 14000.873: 90.8288% ( 9) 00:09:13.183 14000.873 - 14060.451: 90.9137% ( 10) 00:09:13.183 14060.451 - 14120.029: 90.9817% ( 8) 00:09:13.183 14120.029 - 14179.607: 91.0921% ( 13) 00:09:13.183 14179.607 - 14239.185: 91.1600% ( 8) 00:09:13.183 14239.185 - 14298.764: 91.2024% ( 5) 00:09:13.183 14298.764 - 14358.342: 91.2364% ( 4) 00:09:13.183 14358.342 - 14417.920: 91.2704% ( 4) 00:09:13.183 14417.920 - 14477.498: 91.2959% ( 3) 00:09:13.183 14477.498 - 14537.076: 91.3383% ( 5) 00:09:13.183 14537.076 - 14596.655: 91.3893% ( 6) 00:09:13.183 14596.655 - 14656.233: 91.4232% ( 4) 00:09:13.183 14656.233 - 14715.811: 91.4572% ( 4) 00:09:13.183 14715.811 - 14775.389: 91.4742% ( 2) 00:09:13.183 14775.389 - 14834.967: 91.4912% ( 2) 00:09:13.183 14834.967 - 14894.545: 91.5082% ( 2) 00:09:13.183 14894.545 - 14954.124: 91.5251% ( 2) 00:09:13.183 14954.124 - 15013.702: 91.5336% ( 1) 00:09:13.183 15013.702 - 15073.280: 91.5591% ( 3) 00:09:13.183 15073.280 - 15132.858: 91.5676% ( 1) 00:09:13.183 15132.858 - 15192.436: 91.5846% ( 2) 00:09:13.183 15192.436 - 15252.015: 91.6101% ( 3) 00:09:13.183 15252.015 - 15371.171: 91.6355% ( 3) 00:09:13.183 15371.171 - 15490.327: 91.6695% ( 4) 00:09:13.183 15490.327 - 15609.484: 91.7120% ( 5) 00:09:13.183 15609.484 - 15728.640: 91.7374% ( 3) 00:09:13.183 15728.640 - 15847.796: 91.7629% ( 3) 00:09:13.183 15847.796 - 15966.953: 91.7884% ( 3) 00:09:13.183 15966.953 - 16086.109: 91.8139% ( 3) 00:09:13.183 16086.109 - 16205.265: 91.8478% ( 4) 00:09:13.183 16562.735 - 16681.891: 91.8818% ( 4) 00:09:13.183 16681.891 - 16801.047: 91.9243% ( 5) 00:09:13.183 16801.047 - 16920.204: 91.9752% ( 6) 00:09:13.183 16920.204 - 17039.360: 92.0346% ( 7) 00:09:13.183 17039.360 - 17158.516: 92.0856% ( 6) 00:09:13.183 17158.516 - 17277.673: 92.1281% ( 5) 00:09:13.183 17277.673 - 17396.829: 92.1790% ( 6) 00:09:13.183 17396.829 - 17515.985: 92.2300% ( 6) 00:09:13.183 17515.985 - 17635.142: 92.2724% ( 5) 00:09:13.183 17635.142 - 17754.298: 92.3149% ( 5) 00:09:13.183 17754.298 - 17873.455: 92.3573% ( 5) 00:09:13.183 17873.455 - 17992.611: 92.4083% ( 6) 00:09:13.183 17992.611 - 18111.767: 92.4592% ( 6) 00:09:13.183 18111.767 - 18230.924: 92.5102% ( 6) 00:09:13.183 18230.924 - 18350.080: 92.5526% ( 5) 00:09:13.183 18350.080 - 18469.236: 92.6036% ( 6) 00:09:13.183 18469.236 - 18588.393: 92.6546% ( 6) 00:09:13.183 18588.393 - 18707.549: 92.6970% ( 5) 00:09:13.183 18707.549 - 18826.705: 92.7565% ( 7) 00:09:13.183 18826.705 - 18945.862: 92.8499% ( 11) 00:09:13.183 18945.862 - 19065.018: 92.9857% ( 16) 00:09:13.183 19065.018 - 19184.175: 93.1471% ( 19) 00:09:13.183 19184.175 - 19303.331: 93.2660% ( 14) 00:09:13.183 19303.331 - 19422.487: 93.4018% ( 16) 00:09:13.183 19422.487 - 19541.644: 93.5971% ( 23) 00:09:13.183 19541.644 - 19660.800: 93.7500% ( 18) 00:09:13.183 19660.800 - 19779.956: 93.8774% ( 15) 00:09:13.183 19779.956 - 19899.113: 94.1151% ( 28) 00:09:13.183 19899.113 - 20018.269: 94.2765% ( 19) 00:09:13.183 20018.269 - 20137.425: 94.4718% ( 23) 00:09:13.183 20137.425 - 20256.582: 94.6162% ( 17) 00:09:13.183 20256.582 - 20375.738: 94.8030% ( 22) 00:09:13.183 20375.738 - 20494.895: 94.9813% ( 21) 00:09:13.183 20494.895 - 20614.051: 95.1766% ( 23) 00:09:13.183 20614.051 - 20733.207: 95.3550% ( 21) 00:09:13.183 20733.207 - 20852.364: 95.5163% ( 19) 00:09:13.183 20852.364 - 20971.520: 95.7116% ( 23) 00:09:13.183 20971.520 - 21090.676: 95.8984% ( 22) 00:09:13.183 21090.676 - 
21209.833: 96.0513% ( 18) 00:09:13.183 21209.833 - 21328.989: 96.2806% ( 27) 00:09:13.183 21328.989 - 21448.145: 96.4504% ( 20) 00:09:13.183 21448.145 - 21567.302: 96.6542% ( 24) 00:09:13.183 21567.302 - 21686.458: 96.8071% ( 18) 00:09:13.183 21686.458 - 21805.615: 97.0109% ( 24) 00:09:13.184 21805.615 - 21924.771: 97.1807% ( 20) 00:09:13.184 21924.771 - 22043.927: 97.3675% ( 22) 00:09:13.184 22043.927 - 22163.084: 97.5543% ( 22) 00:09:13.184 22163.084 - 22282.240: 97.7327% ( 21) 00:09:13.184 22282.240 - 22401.396: 97.9704% ( 28) 00:09:13.184 22401.396 - 22520.553: 98.1318% ( 19) 00:09:13.184 22520.553 - 22639.709: 98.3271% ( 23) 00:09:13.184 22639.709 - 22758.865: 98.4800% ( 18) 00:09:13.184 22758.865 - 22878.022: 98.5819% ( 12) 00:09:13.184 22878.022 - 22997.178: 98.6838% ( 12) 00:09:13.184 22997.178 - 23116.335: 98.7772% ( 11) 00:09:13.184 23116.335 - 23235.491: 98.8196% ( 5) 00:09:13.184 23235.491 - 23354.647: 98.8876% ( 8) 00:09:13.184 23354.647 - 23473.804: 98.9130% ( 3) 00:09:13.184 26095.244 - 26214.400: 98.9215% ( 1) 00:09:13.184 26214.400 - 26333.556: 98.9385% ( 2) 00:09:13.184 26333.556 - 26452.713: 98.9640% ( 3) 00:09:13.184 26452.713 - 26571.869: 98.9895% ( 3) 00:09:13.184 26571.869 - 26691.025: 99.0149% ( 3) 00:09:13.184 26691.025 - 26810.182: 99.0404% ( 3) 00:09:13.184 26810.182 - 26929.338: 99.0659% ( 3) 00:09:13.184 26929.338 - 27048.495: 99.0914% ( 3) 00:09:13.184 27048.495 - 27167.651: 99.1253% ( 4) 00:09:13.184 27167.651 - 27286.807: 99.1423% ( 2) 00:09:13.184 27286.807 - 27405.964: 99.1678% ( 3) 00:09:13.184 27405.964 - 27525.120: 99.2018% ( 4) 00:09:13.184 27525.120 - 27644.276: 99.2188% ( 2) 00:09:13.184 27644.276 - 27763.433: 99.2527% ( 4) 00:09:13.184 27763.433 - 27882.589: 99.2697% ( 2) 00:09:13.184 27882.589 - 28001.745: 99.2867% ( 2) 00:09:13.184 28001.745 - 28120.902: 99.3122% ( 3) 00:09:13.184 28120.902 - 28240.058: 99.3291% ( 2) 00:09:13.184 28240.058 - 28359.215: 99.3546% ( 3) 00:09:13.184 28359.215 - 28478.371: 99.3631% ( 1) 00:09:13.184 28478.371 - 28597.527: 99.3801% ( 2) 00:09:13.184 28597.527 - 28716.684: 99.4056% ( 3) 00:09:13.184 28716.684 - 28835.840: 99.4226% ( 2) 00:09:13.184 28835.840 - 28954.996: 99.4480% ( 3) 00:09:13.184 28954.996 - 29074.153: 99.4565% ( 1) 00:09:13.184 34793.658 - 35031.971: 99.4650% ( 1) 00:09:13.184 35031.971 - 35270.284: 99.5075% ( 5) 00:09:13.184 35270.284 - 35508.596: 99.5584% ( 6) 00:09:13.184 35508.596 - 35746.909: 99.6009% ( 5) 00:09:13.184 35746.909 - 35985.222: 99.6518% ( 6) 00:09:13.184 35985.222 - 36223.535: 99.6943% ( 5) 00:09:13.184 36223.535 - 36461.847: 99.7452% ( 6) 00:09:13.184 36461.847 - 36700.160: 99.7962% ( 6) 00:09:13.184 36700.160 - 36938.473: 99.8471% ( 6) 00:09:13.184 36938.473 - 37176.785: 99.8896% ( 5) 00:09:13.184 37176.785 - 37415.098: 99.9406% ( 6) 00:09:13.184 37415.098 - 37653.411: 99.9915% ( 6) 00:09:13.184 37653.411 - 37891.724: 100.0000% ( 1) 00:09:13.184 00:09:13.184 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:09:13.184 ============================================================================== 00:09:13.184 Range in us Cumulative IO count 00:09:13.184 7864.320 - 7923.898: 0.0170% ( 2) 00:09:13.184 7923.898 - 7983.476: 0.1019% ( 10) 00:09:13.184 7983.476 - 8043.055: 0.2632% ( 19) 00:09:13.184 8043.055 - 8102.633: 0.4161% ( 18) 00:09:13.184 8102.633 - 8162.211: 0.5944% ( 21) 00:09:13.184 8162.211 - 8221.789: 0.7643% ( 20) 00:09:13.184 8221.789 - 8281.367: 0.9511% ( 22) 00:09:13.184 8281.367 - 8340.945: 1.1804% ( 27) 00:09:13.184 8340.945 - 8400.524: 1.5455% ( 43) 
00:09:13.184 8400.524 - 8460.102: 1.9192% ( 44) 00:09:13.184 8460.102 - 8519.680: 2.3607% ( 52) 00:09:13.184 8519.680 - 8579.258: 3.1080% ( 88) 00:09:13.184 8579.258 - 8638.836: 4.0166% ( 107) 00:09:13.184 8638.836 - 8698.415: 5.2225% ( 142) 00:09:13.184 8698.415 - 8757.993: 6.7086% ( 175) 00:09:13.184 8757.993 - 8817.571: 8.5173% ( 213) 00:09:13.184 8817.571 - 8877.149: 10.7337% ( 261) 00:09:13.184 8877.149 - 8936.727: 13.3067% ( 303) 00:09:13.184 8936.727 - 8996.305: 16.2449% ( 346) 00:09:13.184 8996.305 - 9055.884: 19.2510% ( 354) 00:09:13.184 9055.884 - 9115.462: 22.3760% ( 368) 00:09:13.184 9115.462 - 9175.040: 25.5605% ( 375) 00:09:13.184 9175.040 - 9234.618: 28.5496% ( 352) 00:09:13.184 9234.618 - 9294.196: 31.7340% ( 375) 00:09:13.184 9294.196 - 9353.775: 34.7147% ( 351) 00:09:13.184 9353.775 - 9413.353: 37.9076% ( 376) 00:09:13.184 9413.353 - 9472.931: 40.8628% ( 348) 00:09:13.184 9472.931 - 9532.509: 43.9963% ( 369) 00:09:13.184 9532.509 - 9592.087: 46.9684% ( 350) 00:09:13.184 9592.087 - 9651.665: 49.9151% ( 347) 00:09:13.184 9651.665 - 9711.244: 52.5900% ( 315) 00:09:13.184 9711.244 - 9770.822: 55.1461% ( 301) 00:09:13.184 9770.822 - 9830.400: 57.3115% ( 255) 00:09:13.184 9830.400 - 9889.978: 59.2986% ( 234) 00:09:13.184 9889.978 - 9949.556: 60.9800% ( 198) 00:09:13.184 9949.556 - 10009.135: 62.3981% ( 167) 00:09:13.184 10009.135 - 10068.713: 63.5700% ( 138) 00:09:13.184 10068.713 - 10128.291: 64.5041% ( 110) 00:09:13.184 10128.291 - 10187.869: 65.3787% ( 103) 00:09:13.184 10187.869 - 10247.447: 66.2279% ( 100) 00:09:13.184 10247.447 - 10307.025: 67.0771% ( 100) 00:09:13.184 10307.025 - 10366.604: 67.9348% ( 101) 00:09:13.184 10366.604 - 10426.182: 68.8774% ( 111) 00:09:13.184 10426.182 - 10485.760: 69.8115% ( 110) 00:09:13.184 10485.760 - 10545.338: 70.7965% ( 116) 00:09:13.184 10545.338 - 10604.916: 71.8071% ( 119) 00:09:13.184 10604.916 - 10664.495: 72.6817% ( 103) 00:09:13.184 10664.495 - 10724.073: 73.6073% ( 109) 00:09:13.184 10724.073 - 10783.651: 74.3801% ( 91) 00:09:13.184 10783.651 - 10843.229: 75.1953% ( 96) 00:09:13.184 10843.229 - 10902.807: 75.9426% ( 88) 00:09:13.184 10902.807 - 10962.385: 76.7154% ( 91) 00:09:13.184 10962.385 - 11021.964: 77.5476% ( 98) 00:09:13.184 11021.964 - 11081.542: 78.3203% ( 91) 00:09:13.184 11081.542 - 11141.120: 79.0591% ( 87) 00:09:13.184 11141.120 - 11200.698: 79.8149% ( 89) 00:09:13.184 11200.698 - 11260.276: 80.4942% ( 80) 00:09:13.184 11260.276 - 11319.855: 81.0717% ( 68) 00:09:13.184 11319.855 - 11379.433: 81.6406% ( 67) 00:09:13.184 11379.433 - 11439.011: 82.1586% ( 61) 00:09:13.184 11439.011 - 11498.589: 82.6172% ( 54) 00:09:13.184 11498.589 - 11558.167: 82.9654% ( 41) 00:09:13.184 11558.167 - 11617.745: 83.2880% ( 38) 00:09:13.184 11617.745 - 11677.324: 83.6787% ( 46) 00:09:13.184 11677.324 - 11736.902: 84.0268% ( 41) 00:09:13.184 11736.902 - 11796.480: 84.3325% ( 36) 00:09:13.184 11796.480 - 11856.058: 84.6043% ( 32) 00:09:13.184 11856.058 - 11915.636: 84.8930% ( 34) 00:09:13.184 11915.636 - 11975.215: 85.0883% ( 23) 00:09:13.184 11975.215 - 12034.793: 85.2751% ( 22) 00:09:13.184 12034.793 - 12094.371: 85.4789% ( 24) 00:09:13.184 12094.371 - 12153.949: 85.6827% ( 24) 00:09:13.184 12153.949 - 12213.527: 85.8526% ( 20) 00:09:13.184 12213.527 - 12273.105: 86.0224% ( 20) 00:09:13.184 12273.105 - 12332.684: 86.1923% ( 20) 00:09:13.184 12332.684 - 12392.262: 86.3536% ( 19) 00:09:13.184 12392.262 - 12451.840: 86.5319% ( 21) 00:09:13.184 12451.840 - 12511.418: 86.7018% ( 20) 00:09:13.184 12511.418 - 12570.996: 86.8631% ( 19) 
00:09:13.184 12570.996 - 12630.575: 87.0499% ( 22) 00:09:13.184 12630.575 - 12690.153: 87.2368% ( 22) 00:09:13.184 12690.153 - 12749.731: 87.4406% ( 24) 00:09:13.184 12749.731 - 12809.309: 87.6529% ( 25) 00:09:13.184 12809.309 - 12868.887: 87.8736% ( 26) 00:09:13.184 12868.887 - 12928.465: 88.1029% ( 27) 00:09:13.184 12928.465 - 12988.044: 88.3492% ( 29) 00:09:13.184 12988.044 - 13047.622: 88.5530% ( 24) 00:09:13.184 13047.622 - 13107.200: 88.6974% ( 17) 00:09:13.184 13107.200 - 13166.778: 88.8587% ( 19) 00:09:13.184 13166.778 - 13226.356: 89.0200% ( 19) 00:09:13.184 13226.356 - 13285.935: 89.1984% ( 21) 00:09:13.184 13285.935 - 13345.513: 89.3767% ( 21) 00:09:13.184 13345.513 - 13405.091: 89.5211% ( 17) 00:09:13.184 13405.091 - 13464.669: 89.6824% ( 19) 00:09:13.184 13464.669 - 13524.247: 89.8353% ( 18) 00:09:13.184 13524.247 - 13583.825: 90.0051% ( 20) 00:09:13.184 13583.825 - 13643.404: 90.1410% ( 16) 00:09:13.184 13643.404 - 13702.982: 90.2514% ( 13) 00:09:13.184 13702.982 - 13762.560: 90.3448% ( 11) 00:09:13.184 13762.560 - 13822.138: 90.4552% ( 13) 00:09:13.184 13822.138 - 13881.716: 90.5486% ( 11) 00:09:13.184 13881.716 - 13941.295: 90.6250% ( 9) 00:09:13.184 13941.295 - 14000.873: 90.7014% ( 9) 00:09:13.184 14000.873 - 14060.451: 90.7779% ( 9) 00:09:13.184 14060.451 - 14120.029: 90.8373% ( 7) 00:09:13.184 14120.029 - 14179.607: 90.9052% ( 8) 00:09:13.184 14179.607 - 14239.185: 90.9732% ( 8) 00:09:13.184 14239.185 - 14298.764: 91.0411% ( 8) 00:09:13.184 14298.764 - 14358.342: 91.1005% ( 7) 00:09:13.184 14358.342 - 14417.920: 91.1600% ( 7) 00:09:13.184 14417.920 - 14477.498: 91.2109% ( 6) 00:09:13.184 14477.498 - 14537.076: 91.2874% ( 9) 00:09:13.184 14537.076 - 14596.655: 91.3298% ( 5) 00:09:13.184 14596.655 - 14656.233: 91.3723% ( 5) 00:09:13.184 14656.233 - 14715.811: 91.4232% ( 6) 00:09:13.184 14715.811 - 14775.389: 91.4402% ( 2) 00:09:13.184 14775.389 - 14834.967: 91.4572% ( 2) 00:09:13.184 14834.967 - 14894.545: 91.4742% ( 2) 00:09:13.184 14894.545 - 14954.124: 91.4912% ( 2) 00:09:13.184 14954.124 - 15013.702: 91.5082% ( 2) 00:09:13.184 15013.702 - 15073.280: 91.5336% ( 3) 00:09:13.184 15073.280 - 15132.858: 91.5506% ( 2) 00:09:13.184 15132.858 - 15192.436: 91.5676% ( 2) 00:09:13.184 15192.436 - 15252.015: 91.5846% ( 2) 00:09:13.184 15252.015 - 15371.171: 91.6101% ( 3) 00:09:13.184 15371.171 - 15490.327: 91.6525% ( 5) 00:09:13.184 15490.327 - 15609.484: 91.7204% ( 8) 00:09:13.184 15609.484 - 15728.640: 91.7799% ( 7) 00:09:13.184 15728.640 - 15847.796: 91.8393% ( 7) 00:09:13.184 15847.796 - 15966.953: 91.9158% ( 9) 00:09:13.184 15966.953 - 16086.109: 91.9752% ( 7) 00:09:13.184 16086.109 - 16205.265: 92.0346% ( 7) 00:09:13.184 16205.265 - 16324.422: 92.0686% ( 4) 00:09:13.184 16324.422 - 16443.578: 92.0941% ( 3) 00:09:13.184 16443.578 - 16562.735: 92.1281% ( 4) 00:09:13.184 16562.735 - 16681.891: 92.1535% ( 3) 00:09:13.184 16681.891 - 16801.047: 92.1790% ( 3) 00:09:13.184 16801.047 - 16920.204: 92.2130% ( 4) 00:09:13.184 16920.204 - 17039.360: 92.2385% ( 3) 00:09:13.184 17039.360 - 17158.516: 92.2724% ( 4) 00:09:13.184 17158.516 - 17277.673: 92.2979% ( 3) 00:09:13.185 17277.673 - 17396.829: 92.3234% ( 3) 00:09:13.185 17396.829 - 17515.985: 92.3573% ( 4) 00:09:13.185 17515.985 - 17635.142: 92.3913% ( 4) 00:09:13.185 17992.611 - 18111.767: 92.3998% ( 1) 00:09:13.185 18111.767 - 18230.924: 92.4338% ( 4) 00:09:13.185 18230.924 - 18350.080: 92.4592% ( 3) 00:09:13.185 18350.080 - 18469.236: 92.4847% ( 3) 00:09:13.185 18469.236 - 18588.393: 92.5187% ( 4) 00:09:13.185 18588.393 - 
18707.549: 92.5442% ( 3) 00:09:13.185 18707.549 - 18826.705: 92.5696% ( 3) 00:09:13.185 18826.705 - 18945.862: 92.6036% ( 4) 00:09:13.185 18945.862 - 19065.018: 92.6291% ( 3) 00:09:13.185 19065.018 - 19184.175: 92.6800% ( 6) 00:09:13.185 19184.175 - 19303.331: 92.8159% ( 16) 00:09:13.185 19303.331 - 19422.487: 92.9603% ( 17) 00:09:13.185 19422.487 - 19541.644: 93.1216% ( 19) 00:09:13.185 19541.644 - 19660.800: 93.3339% ( 25) 00:09:13.185 19660.800 - 19779.956: 93.5717% ( 28) 00:09:13.185 19779.956 - 19899.113: 93.8264% ( 30) 00:09:13.185 19899.113 - 20018.269: 94.0727% ( 29) 00:09:13.185 20018.269 - 20137.425: 94.3359% ( 31) 00:09:13.185 20137.425 - 20256.582: 94.5907% ( 30) 00:09:13.185 20256.582 - 20375.738: 94.8200% ( 27) 00:09:13.185 20375.738 - 20494.895: 95.0493% ( 27) 00:09:13.185 20494.895 - 20614.051: 95.2870% ( 28) 00:09:13.185 20614.051 - 20733.207: 95.4908% ( 24) 00:09:13.185 20733.207 - 20852.364: 95.7201% ( 27) 00:09:13.185 20852.364 - 20971.520: 95.9494% ( 27) 00:09:13.185 20971.520 - 21090.676: 96.1617% ( 25) 00:09:13.185 21090.676 - 21209.833: 96.4079% ( 29) 00:09:13.185 21209.833 - 21328.989: 96.6202% ( 25) 00:09:13.185 21328.989 - 21448.145: 96.8410% ( 26) 00:09:13.185 21448.145 - 21567.302: 97.0279% ( 22) 00:09:13.185 21567.302 - 21686.458: 97.2232% ( 23) 00:09:13.185 21686.458 - 21805.615: 97.4185% ( 23) 00:09:13.185 21805.615 - 21924.771: 97.6138% ( 23) 00:09:13.185 21924.771 - 22043.927: 97.7921% ( 21) 00:09:13.185 22043.927 - 22163.084: 98.0044% ( 25) 00:09:13.185 22163.084 - 22282.240: 98.2082% ( 24) 00:09:13.185 22282.240 - 22401.396: 98.3865% ( 21) 00:09:13.185 22401.396 - 22520.553: 98.5224% ( 16) 00:09:13.185 22520.553 - 22639.709: 98.6243% ( 12) 00:09:13.185 22639.709 - 22758.865: 98.7177% ( 11) 00:09:13.185 22758.865 - 22878.022: 98.8111% ( 11) 00:09:13.185 22878.022 - 22997.178: 98.8791% ( 8) 00:09:13.185 22997.178 - 23116.335: 98.9130% ( 4) 00:09:13.185 24784.524 - 24903.680: 98.9300% ( 2) 00:09:13.185 24903.680 - 25022.836: 98.9555% ( 3) 00:09:13.185 25022.836 - 25141.993: 98.9810% ( 3) 00:09:13.185 25141.993 - 25261.149: 99.0065% ( 3) 00:09:13.185 25261.149 - 25380.305: 99.0404% ( 4) 00:09:13.185 25380.305 - 25499.462: 99.0574% ( 2) 00:09:13.185 25499.462 - 25618.618: 99.0829% ( 3) 00:09:13.185 25618.618 - 25737.775: 99.1084% ( 3) 00:09:13.185 25737.775 - 25856.931: 99.1423% ( 4) 00:09:13.185 25856.931 - 25976.087: 99.1678% ( 3) 00:09:13.185 25976.087 - 26095.244: 99.1933% ( 3) 00:09:13.185 26095.244 - 26214.400: 99.2272% ( 4) 00:09:13.185 26214.400 - 26333.556: 99.2527% ( 3) 00:09:13.185 26333.556 - 26452.713: 99.2782% ( 3) 00:09:13.185 26452.713 - 26571.869: 99.3122% ( 4) 00:09:13.185 26571.869 - 26691.025: 99.3376% ( 3) 00:09:13.185 26691.025 - 26810.182: 99.3631% ( 3) 00:09:13.185 26810.182 - 26929.338: 99.3886% ( 3) 00:09:13.185 26929.338 - 27048.495: 99.4141% ( 3) 00:09:13.185 27048.495 - 27167.651: 99.4480% ( 4) 00:09:13.185 27167.651 - 27286.807: 99.4565% ( 1) 00:09:13.185 32410.531 - 32648.844: 99.4820% ( 3) 00:09:13.185 32648.844 - 32887.156: 99.5329% ( 6) 00:09:13.185 32887.156 - 33125.469: 99.5839% ( 6) 00:09:13.185 33125.469 - 33363.782: 99.6349% ( 6) 00:09:13.185 33363.782 - 33602.095: 99.6858% ( 6) 00:09:13.185 33602.095 - 33840.407: 99.7283% ( 5) 00:09:13.185 33840.407 - 34078.720: 99.7792% ( 6) 00:09:13.185 34078.720 - 34317.033: 99.8387% ( 7) 00:09:13.185 34317.033 - 34555.345: 99.8811% ( 5) 00:09:13.185 34555.345 - 34793.658: 99.9321% ( 6) 00:09:13.185 34793.658 - 35031.971: 99.9915% ( 7) 00:09:13.185 35031.971 - 35270.284: 100.0000% ( 
1) 00:09:13.185 00:09:13.185 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:09:13.185 ============================================================================== 00:09:13.185 Range in us Cumulative IO count 00:09:13.185 7864.320 - 7923.898: 0.0085% ( 1) 00:09:13.185 7923.898 - 7983.476: 0.0934% ( 10) 00:09:13.185 7983.476 - 8043.055: 0.2293% ( 16) 00:09:13.185 8043.055 - 8102.633: 0.4076% ( 21) 00:09:13.185 8102.633 - 8162.211: 0.5774% ( 20) 00:09:13.185 8162.211 - 8221.789: 0.7728% ( 23) 00:09:13.185 8221.789 - 8281.367: 0.9766% ( 24) 00:09:13.185 8281.367 - 8340.945: 1.2058% ( 27) 00:09:13.185 8340.945 - 8400.524: 1.5285% ( 38) 00:09:13.185 8400.524 - 8460.102: 1.9446% ( 49) 00:09:13.185 8460.102 - 8519.680: 2.4202% ( 56) 00:09:13.185 8519.680 - 8579.258: 3.0571% ( 75) 00:09:13.185 8579.258 - 8638.836: 3.9317% ( 103) 00:09:13.185 8638.836 - 8698.415: 5.0357% ( 130) 00:09:13.185 8698.415 - 8757.993: 6.5557% ( 179) 00:09:13.185 8757.993 - 8817.571: 8.4069% ( 218) 00:09:13.185 8817.571 - 8877.149: 10.7167% ( 272) 00:09:13.185 8877.149 - 8936.727: 13.2133% ( 294) 00:09:13.185 8936.727 - 8996.305: 16.2024% ( 352) 00:09:13.185 8996.305 - 9055.884: 19.3020% ( 365) 00:09:13.185 9055.884 - 9115.462: 22.3421% ( 358) 00:09:13.185 9115.462 - 9175.040: 25.4161% ( 362) 00:09:13.185 9175.040 - 9234.618: 28.4137% ( 353) 00:09:13.185 9234.618 - 9294.196: 31.5048% ( 364) 00:09:13.185 9294.196 - 9353.775: 34.5279% ( 356) 00:09:13.185 9353.775 - 9413.353: 37.7463% ( 379) 00:09:13.185 9413.353 - 9472.931: 40.9392% ( 376) 00:09:13.185 9472.931 - 9532.509: 44.0557% ( 367) 00:09:13.185 9532.509 - 9592.087: 47.1467% ( 364) 00:09:13.185 9592.087 - 9651.665: 50.1444% ( 353) 00:09:13.185 9651.665 - 9711.244: 52.9042% ( 325) 00:09:13.185 9711.244 - 9770.822: 55.4093% ( 295) 00:09:13.185 9770.822 - 9830.400: 57.6596% ( 265) 00:09:13.185 9830.400 - 9889.978: 59.5363% ( 221) 00:09:13.185 9889.978 - 9949.556: 61.1498% ( 190) 00:09:13.185 9949.556 - 10009.135: 62.4490% ( 153) 00:09:13.185 10009.135 - 10068.713: 63.5020% ( 124) 00:09:13.185 10068.713 - 10128.291: 64.4531% ( 112) 00:09:13.185 10128.291 - 10187.869: 65.3702% ( 108) 00:09:13.185 10187.869 - 10247.447: 66.2874% ( 108) 00:09:13.185 10247.447 - 10307.025: 67.1790% ( 105) 00:09:13.185 10307.025 - 10366.604: 68.1216% ( 111) 00:09:13.185 10366.604 - 10426.182: 69.1151% ( 117) 00:09:13.185 10426.182 - 10485.760: 70.0153% ( 106) 00:09:13.185 10485.760 - 10545.338: 70.9069% ( 105) 00:09:13.185 10545.338 - 10604.916: 71.6712% ( 90) 00:09:13.185 10604.916 - 10664.495: 72.5034% ( 98) 00:09:13.185 10664.495 - 10724.073: 73.3696% ( 102) 00:09:13.185 10724.073 - 10783.651: 74.2188% ( 100) 00:09:13.185 10783.651 - 10843.229: 75.0594% ( 99) 00:09:13.185 10843.229 - 10902.807: 75.9086% ( 100) 00:09:13.185 10902.807 - 10962.385: 76.7323% ( 97) 00:09:13.185 10962.385 - 11021.964: 77.5306% ( 94) 00:09:13.185 11021.964 - 11081.542: 78.3288% ( 94) 00:09:13.185 11081.542 - 11141.120: 79.1270% ( 94) 00:09:13.185 11141.120 - 11200.698: 79.8319% ( 83) 00:09:13.185 11200.698 - 11260.276: 80.5027% ( 79) 00:09:13.185 11260.276 - 11319.855: 81.0377% ( 63) 00:09:13.185 11319.855 - 11379.433: 81.4963% ( 54) 00:09:13.185 11379.433 - 11439.011: 81.9718% ( 56) 00:09:13.185 11439.011 - 11498.589: 82.4049% ( 51) 00:09:13.185 11498.589 - 11558.167: 82.8125% ( 48) 00:09:13.185 11558.167 - 11617.745: 83.1776% ( 43) 00:09:13.185 11617.745 - 11677.324: 83.5428% ( 43) 00:09:13.185 11677.324 - 11736.902: 83.8995% ( 42) 00:09:13.185 11736.902 - 11796.480: 84.2646% ( 43) 00:09:13.185 
11796.480 - 11856.058: 84.5873% ( 38) 00:09:13.185 11856.058 - 11915.636: 84.8845% ( 35) 00:09:13.185 11915.636 - 11975.215: 85.1987% ( 37) 00:09:13.185 11975.215 - 12034.793: 85.4365% ( 28) 00:09:13.185 12034.793 - 12094.371: 85.6997% ( 31) 00:09:13.185 12094.371 - 12153.949: 85.9290% ( 27) 00:09:13.185 12153.949 - 12213.527: 86.1583% ( 27) 00:09:13.185 12213.527 - 12273.105: 86.3791% ( 26) 00:09:13.185 12273.105 - 12332.684: 86.5744% ( 23) 00:09:13.185 12332.684 - 12392.262: 86.7442% ( 20) 00:09:13.185 12392.262 - 12451.840: 86.8971% ( 18) 00:09:13.185 12451.840 - 12511.418: 87.0414% ( 17) 00:09:13.185 12511.418 - 12570.996: 87.1518% ( 13) 00:09:13.185 12570.996 - 12630.575: 87.2707% ( 14) 00:09:13.185 12630.575 - 12690.153: 87.3726% ( 12) 00:09:13.185 12690.153 - 12749.731: 87.5170% ( 17) 00:09:13.185 12749.731 - 12809.309: 87.7208% ( 24) 00:09:13.185 12809.309 - 12868.887: 87.8991% ( 21) 00:09:13.185 12868.887 - 12928.465: 88.0859% ( 22) 00:09:13.185 12928.465 - 12988.044: 88.2388% ( 18) 00:09:13.185 12988.044 - 13047.622: 88.3832% ( 17) 00:09:13.185 13047.622 - 13107.200: 88.5360% ( 18) 00:09:13.185 13107.200 - 13166.778: 88.7143% ( 21) 00:09:13.185 13166.778 - 13226.356: 88.9012% ( 22) 00:09:13.185 13226.356 - 13285.935: 89.1050% ( 24) 00:09:13.185 13285.935 - 13345.513: 89.3003% ( 23) 00:09:13.185 13345.513 - 13405.091: 89.5041% ( 24) 00:09:13.185 13405.091 - 13464.669: 89.6739% ( 20) 00:09:13.185 13464.669 - 13524.247: 89.8438% ( 20) 00:09:13.185 13524.247 - 13583.825: 90.0136% ( 20) 00:09:13.185 13583.825 - 13643.404: 90.1919% ( 21) 00:09:13.185 13643.404 - 13702.982: 90.3618% ( 20) 00:09:13.185 13702.982 - 13762.560: 90.4976% ( 16) 00:09:13.185 13762.560 - 13822.138: 90.5995% ( 12) 00:09:13.185 13822.138 - 13881.716: 90.6675% ( 8) 00:09:13.185 13881.716 - 13941.295: 90.7524% ( 10) 00:09:13.185 13941.295 - 14000.873: 90.8373% ( 10) 00:09:13.185 14000.873 - 14060.451: 90.9052% ( 8) 00:09:13.185 14060.451 - 14120.029: 90.9817% ( 9) 00:09:13.185 14120.029 - 14179.607: 91.0326% ( 6) 00:09:13.185 14179.607 - 14239.185: 91.1005% ( 8) 00:09:13.185 14239.185 - 14298.764: 91.1600% ( 7) 00:09:13.185 14298.764 - 14358.342: 91.2194% ( 7) 00:09:13.185 14358.342 - 14417.920: 91.2704% ( 6) 00:09:13.185 14417.920 - 14477.498: 91.3383% ( 8) 00:09:13.185 14477.498 - 14537.076: 91.3893% ( 6) 00:09:13.185 14537.076 - 14596.655: 91.4232% ( 4) 00:09:13.185 14596.655 - 14656.233: 91.4657% ( 5) 00:09:13.185 14656.233 - 14715.811: 91.5082% ( 5) 00:09:13.186 14715.811 - 14775.389: 91.5591% ( 6) 00:09:13.186 14775.389 - 14834.967: 91.6016% ( 5) 00:09:13.186 14834.967 - 14894.545: 91.6355% ( 4) 00:09:13.186 14894.545 - 14954.124: 91.6610% ( 3) 00:09:13.186 14954.124 - 15013.702: 91.6950% ( 4) 00:09:13.186 15013.702 - 15073.280: 91.7289% ( 4) 00:09:13.186 15073.280 - 15132.858: 91.7544% ( 3) 00:09:13.186 15132.858 - 15192.436: 91.7884% ( 4) 00:09:13.186 15192.436 - 15252.015: 91.8224% ( 4) 00:09:13.186 15252.015 - 15371.171: 91.8903% ( 8) 00:09:13.186 15371.171 - 15490.327: 91.9497% ( 7) 00:09:13.186 15490.327 - 15609.484: 92.0092% ( 7) 00:09:13.186 15609.484 - 15728.640: 92.0771% ( 8) 00:09:13.186 15728.640 - 15847.796: 92.1365% ( 7) 00:09:13.186 15847.796 - 15966.953: 92.2045% ( 8) 00:09:13.186 15966.953 - 16086.109: 92.2385% ( 4) 00:09:13.186 16086.109 - 16205.265: 92.2639% ( 3) 00:09:13.186 16205.265 - 16324.422: 92.2979% ( 4) 00:09:13.186 16324.422 - 16443.578: 92.3319% ( 4) 00:09:13.186 16443.578 - 16562.735: 92.3573% ( 3) 00:09:13.186 16562.735 - 16681.891: 92.3913% ( 4) 00:09:13.186 18826.705 - 
18945.862: 92.4083% ( 2) 00:09:13.186 18945.862 - 19065.018: 92.4338% ( 3) 00:09:13.186 19065.018 - 19184.175: 92.5017% ( 8) 00:09:13.186 19184.175 - 19303.331: 92.6036% ( 12) 00:09:13.186 19303.331 - 19422.487: 92.7310% ( 15) 00:09:13.186 19422.487 - 19541.644: 92.9178% ( 22) 00:09:13.186 19541.644 - 19660.800: 93.1046% ( 22) 00:09:13.186 19660.800 - 19779.956: 93.3339% ( 27) 00:09:13.186 19779.956 - 19899.113: 93.5547% ( 26) 00:09:13.186 19899.113 - 20018.269: 93.8264% ( 32) 00:09:13.186 20018.269 - 20137.425: 94.0557% ( 27) 00:09:13.186 20137.425 - 20256.582: 94.3190% ( 31) 00:09:13.186 20256.582 - 20375.738: 94.5482% ( 27) 00:09:13.186 20375.738 - 20494.895: 94.8030% ( 30) 00:09:13.186 20494.895 - 20614.051: 95.0747% ( 32) 00:09:13.186 20614.051 - 20733.207: 95.3125% ( 28) 00:09:13.186 20733.207 - 20852.364: 95.5757% ( 31) 00:09:13.186 20852.364 - 20971.520: 95.8305% ( 30) 00:09:13.186 20971.520 - 21090.676: 96.0938% ( 31) 00:09:13.186 21090.676 - 21209.833: 96.3145% ( 26) 00:09:13.186 21209.833 - 21328.989: 96.5438% ( 27) 00:09:13.186 21328.989 - 21448.145: 96.7646% ( 26) 00:09:13.186 21448.145 - 21567.302: 96.9599% ( 23) 00:09:13.186 21567.302 - 21686.458: 97.1552% ( 23) 00:09:13.186 21686.458 - 21805.615: 97.3505% ( 23) 00:09:13.186 21805.615 - 21924.771: 97.5459% ( 23) 00:09:13.186 21924.771 - 22043.927: 97.7412% ( 23) 00:09:13.186 22043.927 - 22163.084: 97.9450% ( 24) 00:09:13.186 22163.084 - 22282.240: 98.1233% ( 21) 00:09:13.186 22282.240 - 22401.396: 98.2931% ( 20) 00:09:13.186 22401.396 - 22520.553: 98.4545% ( 19) 00:09:13.186 22520.553 - 22639.709: 98.5734% ( 14) 00:09:13.186 22639.709 - 22758.865: 98.6838% ( 13) 00:09:13.186 22758.865 - 22878.022: 98.7942% ( 13) 00:09:13.186 22878.022 - 22997.178: 98.8961% ( 12) 00:09:13.186 22997.178 - 23116.335: 98.9810% ( 10) 00:09:13.186 23116.335 - 23235.491: 99.0234% ( 5) 00:09:13.186 23235.491 - 23354.647: 99.0574% ( 4) 00:09:13.186 23354.647 - 23473.804: 99.0744% ( 2) 00:09:13.186 23473.804 - 23592.960: 99.0999% ( 3) 00:09:13.186 23592.960 - 23712.116: 99.1338% ( 4) 00:09:13.186 23712.116 - 23831.273: 99.1508% ( 2) 00:09:13.186 23831.273 - 23950.429: 99.1763% ( 3) 00:09:13.186 23950.429 - 24069.585: 99.2018% ( 3) 00:09:13.186 24069.585 - 24188.742: 99.2188% ( 2) 00:09:13.186 24188.742 - 24307.898: 99.2442% ( 3) 00:09:13.186 24307.898 - 24427.055: 99.2612% ( 2) 00:09:13.186 24427.055 - 24546.211: 99.2867% ( 3) 00:09:13.186 24546.211 - 24665.367: 99.3122% ( 3) 00:09:13.186 24665.367 - 24784.524: 99.3376% ( 3) 00:09:13.186 24784.524 - 24903.680: 99.3716% ( 4) 00:09:13.186 24903.680 - 25022.836: 99.3971% ( 3) 00:09:13.186 25022.836 - 25141.993: 99.4226% ( 3) 00:09:13.186 25141.993 - 25261.149: 99.4480% ( 3) 00:09:13.186 25261.149 - 25380.305: 99.4565% ( 1) 00:09:13.186 30384.873 - 30504.029: 99.4650% ( 1) 00:09:13.186 30504.029 - 30742.342: 99.5075% ( 5) 00:09:13.186 30742.342 - 30980.655: 99.5584% ( 6) 00:09:13.186 30980.655 - 31218.967: 99.6094% ( 6) 00:09:13.186 31218.967 - 31457.280: 99.6603% ( 6) 00:09:13.186 31457.280 - 31695.593: 99.7113% ( 6) 00:09:13.186 31695.593 - 31933.905: 99.7622% ( 6) 00:09:13.186 31933.905 - 32172.218: 99.8132% ( 6) 00:09:13.186 32172.218 - 32410.531: 99.8641% ( 6) 00:09:13.186 32410.531 - 32648.844: 99.9151% ( 6) 00:09:13.186 32648.844 - 32887.156: 99.9745% ( 7) 00:09:13.186 32887.156 - 33125.469: 100.0000% ( 3) 00:09:13.186 00:09:13.186 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0: 00:09:13.186 ============================================================================== 00:09:13.186 
00:09:13.186 
00:09:13.186 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:13.186 ==============================================================================
00:09:13.186        Range in us     Cumulative    IO count
[histogram rows omitted: buckets 7923.898 to 30384.873 us, cumulative IO 0.0849% -> 100.0000%]
00:09:13.187 
00:09:13.187 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:13.187 ==============================================================================
00:09:13.187        Range in us     Cumulative    IO count
[histogram rows omitted: buckets 7864.320 to 27882.589 us, cumulative IO 0.0085% -> 100.0000%]
00:09:13.188 
00:09:13.188 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:13.188 ==============================================================================
00:09:13.188        Range in us     Cumulative    IO count
[histogram rows omitted: buckets 7923.898 to 25261.149 us, cumulative IO 0.0849% -> 100.0000%]
00:09:13.189 
00:09:13.189 17:59:29 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
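A gloss on the spdk_nvme_perf flags in the command above (my reading of the tool's options, not asserted by the log itself, so verify against the perf usage text): -q 128 keeps 128 I/Os outstanding per namespace, -w write runs a 100% sequential-write workload, -o 12288 issues 12288-byte (12 KiB) I/Os, -t 1 runs for one second, and -i 0 joins shared-memory group 0. -L enables software latency tracking; giving it twice (-LL), as here, also prints the per-bucket latency histograms that follow the summary data.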
00:09:14.595 Initializing NVMe Controllers
00:09:14.595 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:09:14.595 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:09:14.595 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:09:14.595 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:09:14.595 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:09:14.595 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:09:14.595 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:09:14.595 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:09:14.595 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:09:14.595 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:09:14.595 Initialization complete. Launching workers.
00:09:14.595 ========================================================
00:09:14.595                                                                             Latency(us)
00:09:14.595 Device Information                     :       IOPS      MiB/s    Average        min        max
00:09:14.595 PCIE (0000:00:10.0) NSID 1 from core 0:   11038.81     129.36   11618.72    9249.46   42123.03
00:09:14.595 PCIE (0000:00:11.0) NSID 1 from core 0:   11038.81     129.36   11592.78    9399.69   39588.19
00:09:14.595 PCIE (0000:00:13.0) NSID 1 from core 0:   11038.81     129.36   11566.35    9339.88   37743.46
00:09:14.595 PCIE (0000:00:12.0) NSID 1 from core 0:   11038.81     129.36   11539.95    9333.26   35286.46
00:09:14.595 PCIE (0000:00:12.0) NSID 2 from core 0:   11038.81     129.36   11513.50    9336.62   32915.04
00:09:14.595 PCIE (0000:00:12.0) NSID 3 from core 0:   11038.81     129.36   11486.78    9349.12   30490.97
00:09:14.595 ========================================================
00:09:14.595 Total                                  :   66232.84     776.17   11553.01    9249.46   42123.03
00:09:14.595 
00:09:14.595 Summary latency data from core 0 (values in us):
00:09:14.595 =================================================================================
00:09:14.595 (columns: PCIE 0000:00:10.0 NSID 1, 0000:00:11.0 NSID 1, 0000:00:13.0 NSID 1, 0000:00:12.0 NSID 1/2/3)
00:09:14.595 Percentile     00:10.0/1    00:11.0/1    00:13.0/1    00:12.0/1    00:12.0/2    00:12.0/3
00:09:14.595   1.00000%      9592.087     9711.244     9651.665     9711.244     9711.244     9711.244
00:09:14.595  10.00000%     10128.291    10187.869    10187.869    10187.869    10187.869    10187.869
00:09:14.595  25.00000%     10604.916    10604.916    10604.916    10604.916    10604.916    10604.916
00:09:14.595  50.00000%     11260.276    11260.276    11260.276    11260.276    11260.276    11260.276
00:09:14.595  75.00000%     12094.371    12094.371    12034.793    12034.793    12034.793    12094.371
00:09:14.595  90.00000%     12809.309    12749.731    12690.153    12690.153    12690.153    12690.153
00:09:14.595  95.00000%     13226.356    13166.778    13226.356    13107.200    13107.200    13166.778
00:09:14.595  98.00000%     14000.873    13822.138    14358.342    14120.029    13941.295    13941.295
00:09:14.595  99.00000%     31457.280    29908.247    28001.745    25499.462    23116.335    20971.520
00:09:14.595  99.50000%     39798.225    37653.411    35746.909    33125.469    30742.342    26810.182
00:09:14.595  99.90000%     41704.727    39321.600    37415.098    35031.971    32648.844    30146.560
00:09:14.595  99.99000%     42181.353    39559.913    37891.724    35270.284    32887.156    30504.029
00:09:14.595  99.99900%     42181.353    39798.225    37891.724    35508.596    33125.469    30504.029
00:09:14.595  99.99990%     42181.353    39798.225    37891.724    35508.596    33125.469    30504.029
00:09:14.595  99.99999%     42181.353    39798.225    37891.724    35508.596    33125.469    30504.029
00:09:14.595 
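The IOPS and MiB/s columns above are mutually consistent with the 12288-byte I/O size: 11038.81 IOPS x 12288 bytes ~ 135.64 MB/s, i.e. 135644897 / 1048576 ~ 129.36 MiB/s per namespace, and 66232.84 x 12288 / 1048576 ~ 776.17 MiB/s for the total row. This arithmetic is a quick sanity check when reading perf output.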
00:09:14.595 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:09:14.595 ==============================================================================
00:09:14.595        Range in us     Cumulative    IO count
[histogram rows omitted: buckets 9234.618 to 42181.353 us, cumulative IO 0.0542% -> 100.0000%]
00:09:14.596 
00:09:14.596 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:09:14.596 ==============================================================================
00:09:14.596        Range in us     Cumulative    IO count
[histogram rows omitted: buckets 9353.775 to 39798.225 us, cumulative IO 0.0271% -> 100.0000%]
00:09:14.597 
00:09:14.597 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:09:14.597 ==============================================================================
00:09:14.597        Range in us     Cumulative    IO count
[histogram rows omitted: buckets 9294.196 to 37891.724 us, cumulative IO 0.0181% -> 100.0000%]
00:09:14.597 
00:09:14.597 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:09:14.597 ==============================================================================
00:09:14.597        Range in us     Cumulative    IO count
[histogram rows omitted: buckets 9294.196 to 35508.596 us, cumulative IO 0.0271% -> 100.0000%]
00:09:14.598 
00:09:14.598 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:09:14.598 ==============================================================================
00:09:14.598        Range in us     Cumulative    IO count
[histogram rows omitted: buckets 9294.196 to 33125.469 us, cumulative IO 0.0271% -> 100.0000%]
00:09:14.599 
00:09:14.599 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:09:14.599 ==============================================================================
00:09:14.599        Range in us     Cumulative    IO count
[histogram rows shown only for buckets 9294.196 to 9711.244 us, cumulative IO 0.0181% -> 1.2012%; the log excerpt ends mid-row here]
9770.822: 1.8064% ( 67) 00:09:14.599 9770.822 - 9830.400: 2.4928% ( 76) 00:09:14.599 9830.400 - 9889.978: 3.6398% ( 127) 00:09:14.599 9889.978 - 9949.556: 4.8049% ( 129) 00:09:14.599 9949.556 - 10009.135: 6.0332% ( 136) 00:09:14.599 10009.135 - 10068.713: 7.5235% ( 165) 00:09:14.599 10068.713 - 10128.291: 9.2757% ( 194) 00:09:14.599 10128.291 - 10187.869: 11.1633% ( 209) 00:09:14.599 10187.869 - 10247.447: 13.3038% ( 237) 00:09:14.599 10247.447 - 10307.025: 15.4082% ( 233) 00:09:14.599 10307.025 - 10366.604: 17.6572% ( 249) 00:09:14.599 10366.604 - 10426.182: 19.9061% ( 249) 00:09:14.599 10426.182 - 10485.760: 22.0827% ( 241) 00:09:14.599 10485.760 - 10545.338: 24.2142% ( 236) 00:09:14.599 10545.338 - 10604.916: 26.3006% ( 231) 00:09:14.599 10604.916 - 10664.495: 28.6308% ( 258) 00:09:14.599 10664.495 - 10724.073: 30.9429% ( 256) 00:09:14.599 10724.073 - 10783.651: 33.3363% ( 265) 00:09:14.599 10783.651 - 10843.229: 35.7298% ( 265) 00:09:14.599 10843.229 - 10902.807: 38.0509% ( 257) 00:09:14.599 10902.807 - 10962.385: 40.2999% ( 249) 00:09:14.599 10962.385 - 11021.964: 42.4314% ( 236) 00:09:14.599 11021.964 - 11081.542: 44.6261% ( 243) 00:09:14.599 11081.542 - 11141.120: 46.5770% ( 216) 00:09:14.599 11141.120 - 11200.698: 48.5820% ( 222) 00:09:14.599 11200.698 - 11260.276: 50.5961% ( 223) 00:09:14.599 11260.276 - 11319.855: 52.4928% ( 210) 00:09:14.599 11319.855 - 11379.433: 54.4707% ( 219) 00:09:14.599 11379.433 - 11439.011: 56.3764% ( 211) 00:09:14.599 11439.011 - 11498.589: 58.2009% ( 202) 00:09:14.599 11498.589 - 11558.167: 60.0614% ( 206) 00:09:14.599 11558.167 - 11617.745: 61.8407% ( 197) 00:09:14.599 11617.745 - 11677.324: 63.6922% ( 205) 00:09:14.599 11677.324 - 11736.902: 65.6160% ( 213) 00:09:14.599 11736.902 - 11796.480: 67.3862% ( 196) 00:09:14.599 11796.480 - 11856.058: 69.1655% ( 197) 00:09:14.599 11856.058 - 11915.636: 70.8905% ( 191) 00:09:14.599 11915.636 - 11975.215: 72.7150% ( 202) 00:09:14.599 11975.215 - 12034.793: 74.7110% ( 221) 00:09:14.599 12034.793 - 12094.371: 76.6167% ( 211) 00:09:14.599 12094.371 - 12153.949: 78.6037% ( 220) 00:09:14.599 12153.949 - 12213.527: 80.2294% ( 180) 00:09:14.599 12213.527 - 12273.105: 81.8100% ( 175) 00:09:14.599 12273.105 - 12332.684: 83.3905% ( 175) 00:09:14.599 12332.684 - 12392.262: 84.7272% ( 148) 00:09:14.599 12392.262 - 12451.840: 86.0368% ( 145) 00:09:14.599 12451.840 - 12511.418: 87.2832% ( 138) 00:09:14.599 12511.418 - 12570.996: 88.2496% ( 107) 00:09:14.599 12570.996 - 12630.575: 89.1438% ( 99) 00:09:14.599 12630.575 - 12690.153: 90.0921% ( 105) 00:09:14.599 12690.153 - 12749.731: 90.9140% ( 91) 00:09:14.599 12749.731 - 12809.309: 91.7720% ( 95) 00:09:14.599 12809.309 - 12868.887: 92.4855% ( 79) 00:09:14.599 12868.887 - 12928.465: 93.1720% ( 76) 00:09:14.599 12928.465 - 12988.044: 93.8764% ( 78) 00:09:14.599 12988.044 - 13047.622: 94.4274% ( 61) 00:09:14.599 13047.622 - 13107.200: 94.9693% ( 60) 00:09:14.599 13107.200 - 13166.778: 95.3848% ( 46) 00:09:14.599 13166.778 - 13226.356: 95.8002% ( 46) 00:09:14.599 13226.356 - 13285.935: 96.1344% ( 37) 00:09:14.599 13285.935 - 13345.513: 96.4234% ( 32) 00:09:14.599 13345.513 - 13405.091: 96.5770% ( 17) 00:09:14.599 13405.091 - 13464.669: 96.7395% ( 18) 00:09:14.599 13464.669 - 13524.247: 96.8840% ( 16) 00:09:14.599 13524.247 - 13583.825: 97.0737% ( 21) 00:09:14.599 13583.825 - 13643.404: 97.2543% ( 20) 00:09:14.599 13643.404 - 13702.982: 97.4079% ( 17) 00:09:14.599 13702.982 - 13762.560: 97.5704% ( 18) 00:09:14.599 13762.560 - 13822.138: 97.7150% ( 16) 00:09:14.599 13822.138 - 
13881.716: 97.8685% ( 17) 00:09:14.599 13881.716 - 13941.295: 98.0311% ( 18) 00:09:14.599 13941.295 - 14000.873: 98.1485% ( 13) 00:09:14.599 14000.873 - 14060.451: 98.2569% ( 12) 00:09:14.599 14060.451 - 14120.029: 98.3652% ( 12) 00:09:14.599 14120.029 - 14179.607: 98.4827% ( 13) 00:09:14.599 14179.607 - 14239.185: 98.5910% ( 12) 00:09:14.599 14239.185 - 14298.764: 98.6543% ( 7) 00:09:14.599 14298.764 - 14358.342: 98.6904% ( 4) 00:09:14.599 14358.342 - 14417.920: 98.7265% ( 4) 00:09:14.599 14417.920 - 14477.498: 98.7536% ( 3) 00:09:14.599 14477.498 - 14537.076: 98.7807% ( 3) 00:09:14.599 14537.076 - 14596.655: 98.8078% ( 3) 00:09:14.599 14596.655 - 14656.233: 98.8439% ( 4) 00:09:14.599 20494.895 - 20614.051: 98.8530% ( 1) 00:09:14.599 20614.051 - 20733.207: 98.8981% ( 5) 00:09:14.599 20733.207 - 20852.364: 98.9704% ( 8) 00:09:14.599 20852.364 - 20971.520: 99.0607% ( 10) 00:09:14.599 20971.520 - 21090.676: 99.0968% ( 4) 00:09:14.599 21090.676 - 21209.833: 99.1149% ( 2) 00:09:14.599 21209.833 - 21328.989: 99.1420% ( 3) 00:09:14.599 21328.989 - 21448.145: 99.1600% ( 2) 00:09:14.599 21448.145 - 21567.302: 99.1871% ( 3) 00:09:14.599 21567.302 - 21686.458: 99.2142% ( 3) 00:09:14.599 21686.458 - 21805.615: 99.2323% ( 2) 00:09:14.599 21805.615 - 21924.771: 99.2594% ( 3) 00:09:14.599 21924.771 - 22043.927: 99.2865% ( 3) 00:09:14.599 22043.927 - 22163.084: 99.3136% ( 3) 00:09:14.599 22163.084 - 22282.240: 99.3407% ( 3) 00:09:14.599 22282.240 - 22401.396: 99.3587% ( 2) 00:09:14.599 22401.396 - 22520.553: 99.3858% ( 3) 00:09:14.599 22520.553 - 22639.709: 99.4129% ( 3) 00:09:14.599 22639.709 - 22758.865: 99.4220% ( 1) 00:09:14.599 26452.713 - 26571.869: 99.4400% ( 2) 00:09:14.599 26571.869 - 26691.025: 99.4762% ( 4) 00:09:14.599 26691.025 - 26810.182: 99.5033% ( 3) 00:09:14.599 27286.807 - 27405.964: 99.5123% ( 1) 00:09:14.599 28120.902 - 28240.058: 99.5213% ( 1) 00:09:14.599 28240.058 - 28359.215: 99.5484% ( 3) 00:09:14.599 28359.215 - 28478.371: 99.5665% ( 2) 00:09:14.599 28478.371 - 28597.527: 99.5845% ( 2) 00:09:14.599 28597.527 - 28716.684: 99.6116% ( 3) 00:09:14.599 28716.684 - 28835.840: 99.6387% ( 3) 00:09:14.599 28835.840 - 28954.996: 99.6568% ( 2) 00:09:14.599 28954.996 - 29074.153: 99.6839% ( 3) 00:09:14.599 29074.153 - 29193.309: 99.7110% ( 3) 00:09:14.599 29193.309 - 29312.465: 99.7381% ( 3) 00:09:14.599 29312.465 - 29431.622: 99.7561% ( 2) 00:09:14.599 29431.622 - 29550.778: 99.7832% ( 3) 00:09:14.600 29550.778 - 29669.935: 99.8103% ( 3) 00:09:14.600 29669.935 - 29789.091: 99.8284% ( 2) 00:09:14.600 29789.091 - 29908.247: 99.8555% ( 3) 00:09:14.600 29908.247 - 30027.404: 99.8826% ( 3) 00:09:14.600 30027.404 - 30146.560: 99.9097% ( 3) 00:09:14.600 30146.560 - 30265.716: 99.9458% ( 4) 00:09:14.600 30265.716 - 30384.873: 99.9729% ( 3) 00:09:14.600 30384.873 - 30504.029: 100.0000% ( 3) 00:09:14.600 00:09:14.600 ************************************ 00:09:14.600 END TEST nvme_perf 00:09:14.600 ************************************ 00:09:14.600 17:59:30 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:09:14.600 00:09:14.600 real 0m2.747s 00:09:14.600 user 0m2.340s 00:09:14.600 sys 0m0.288s 00:09:14.600 17:59:30 nvme.nvme_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:14.600 17:59:30 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:09:14.600 17:59:30 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:14.600 17:59:30 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 
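[Editor's note on the hello_world run launched above: the example probes the local PCIe controllers, attaches them, and prints each active namespace before pushing a small write/read through a host memory buffer. A minimal sketch of the probe/attach half against the public SPDK NVMe API follows; attaching to every discovered controller and the exact printf formatting are simplifying assumptions, not the example's verbatim code.]

    #include <stdio.h>
    #include <inttypes.h>
    #include "spdk/env.h"
    #include "spdk/nvme.h"

    /* probe_cb decides whether to attach to a discovered controller */
    static bool
    probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
             struct spdk_nvme_ctrlr_opts *opts)
    {
        return true; /* sketch: attach to everything the probe finds */
    }

    /* attach_cb runs once per attached controller */
    static void
    attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
              struct spdk_nvme_ctrlr *ctrlr,
              const struct spdk_nvme_ctrlr_opts *opts)
    {
        printf("Attached to %s\n", trid->traddr);
        /* walk the active namespaces, as in the "Namespace ID: N size:" lines */
        for (uint32_t nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
             nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);
            printf("Namespace ID: %" PRIu32 " size: %" PRIu64 "GB\n", nsid,
                   spdk_nvme_ns_get_size(ns) / 1000000000);
        }
    }

    int
    main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        if (spdk_env_init(&opts) < 0) {
            return 1;
        }
        /* enumerate local controllers; a NULL transport ID means a PCIe scan */
        if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
            return 1;
        }
        return 0;
    }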
00:09:14.600 17:59:30 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:14.600 17:59:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.600 ************************************ 00:09:14.600 START TEST nvme_hello_world 00:09:14.600 ************************************ 00:09:14.600 17:59:30 nvme.nvme_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:09:14.858 Initializing NVMe Controllers 00:09:14.858 Attached to 0000:00:10.0 00:09:14.858 Namespace ID: 1 size: 6GB 00:09:14.858 Attached to 0000:00:11.0 00:09:14.858 Namespace ID: 1 size: 5GB 00:09:14.858 Attached to 0000:00:13.0 00:09:14.858 Namespace ID: 1 size: 1GB 00:09:14.858 Attached to 0000:00:12.0 00:09:14.858 Namespace ID: 1 size: 4GB 00:09:14.858 Namespace ID: 2 size: 4GB 00:09:14.858 Namespace ID: 3 size: 4GB 00:09:14.858 Initialization complete. 00:09:14.858 INFO: using host memory buffer for IO 00:09:14.858 Hello world! 00:09:14.858 INFO: using host memory buffer for IO 00:09:14.858 Hello world! 00:09:14.858 INFO: using host memory buffer for IO 00:09:14.858 Hello world! 00:09:14.858 INFO: using host memory buffer for IO 00:09:14.858 Hello world! 00:09:14.858 INFO: using host memory buffer for IO 00:09:14.858 Hello world! 00:09:14.858 INFO: using host memory buffer for IO 00:09:14.858 Hello world! 00:09:14.858 ************************************ 00:09:14.858 END TEST nvme_hello_world 00:09:14.858 ************************************ 00:09:14.858 00:09:14.858 real 0m0.409s 00:09:14.858 user 0m0.218s 00:09:14.858 sys 0m0.137s 00:09:14.858 17:59:31 nvme.nvme_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:14.858 17:59:31 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:09:14.858 17:59:31 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:14.858 17:59:31 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:14.858 17:59:31 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:14.858 17:59:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:14.858 ************************************ 00:09:14.858 START TEST nvme_sgl 00:09:14.858 ************************************ 00:09:14.858 17:59:31 nvme.nvme_sgl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:09:15.117 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:09:15.117 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:09:15.117 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:09:15.377 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:09:15.378 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:09:15.378 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:09:15.378 0000:00:11.0: build_io_request_0 Invalid IO length parameter 00:09:15.378 0000:00:11.0: build_io_request_1 Invalid IO length parameter 00:09:15.378 0000:00:11.0: build_io_request_3 Invalid IO length parameter 00:09:15.378 0000:00:11.0: build_io_request_8 Invalid IO length parameter 00:09:15.378 0000:00:11.0: build_io_request_9 Invalid IO length parameter 00:09:15.378 0000:00:11.0: build_io_request_11 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_0 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_1 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_2 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_3 Invalid 
IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_4 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_5 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_6 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_7 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_8 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_9 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_10 Invalid IO length parameter 00:09:15.378 0000:00:13.0: build_io_request_11 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_0 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_1 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_2 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_3 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_4 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_5 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_6 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_7 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_8 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_9 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_10 Invalid IO length parameter 00:09:15.378 0000:00:12.0: build_io_request_11 Invalid IO length parameter 00:09:15.378 NVMe Readv/Writev Request test 00:09:15.378 Attached to 0000:00:10.0 00:09:15.378 Attached to 0000:00:11.0 00:09:15.378 Attached to 0000:00:13.0 00:09:15.378 Attached to 0000:00:12.0 00:09:15.378 0000:00:10.0: build_io_request_2 test passed 00:09:15.378 0000:00:10.0: build_io_request_4 test passed 00:09:15.378 0000:00:10.0: build_io_request_5 test passed 00:09:15.378 0000:00:10.0: build_io_request_6 test passed 00:09:15.378 0000:00:10.0: build_io_request_7 test passed 00:09:15.378 0000:00:10.0: build_io_request_10 test passed 00:09:15.378 0000:00:11.0: build_io_request_2 test passed 00:09:15.378 0000:00:11.0: build_io_request_4 test passed 00:09:15.378 0000:00:11.0: build_io_request_5 test passed 00:09:15.378 0000:00:11.0: build_io_request_6 test passed 00:09:15.378 0000:00:11.0: build_io_request_7 test passed 00:09:15.378 0000:00:11.0: build_io_request_10 test passed 00:09:15.378 Cleaning up... 
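[Editor's note on the sgl run above: each build_io_request case describes its payload through reset-SGL/next-SGE callbacks rather than one flat buffer; the cases with deliberately bad total lengths produce the "Invalid IO length parameter" lines, while the well-formed ones pass. A hedged sketch of the callback shape follows; the two-segment iovec and its geometry are illustrative assumptions, not the test's actual request layouts.]

    #include <sys/uio.h>
    #include "spdk/nvme.h"

    struct sgl_ctx {
        struct iovec iov[2]; /* two scattered segments (illustrative) */
        int idx;
    };

    /* called by the driver before (re)walking the scatter-gather list */
    static void
    reset_sgl(void *cb_arg, uint32_t sgl_offset)
    {
        ((struct sgl_ctx *)cb_arg)->idx = 0; /* offset 0 assumed for brevity */
    }

    /* called repeatedly to fetch the next scatter-gather element */
    static int
    next_sge(void *cb_arg, void **address, uint32_t *length)
    {
        struct sgl_ctx *c = cb_arg;

        *address = c->iov[c->idx].iov_base;
        *length = (uint32_t)c->iov[c->idx].iov_len;
        c->idx++;
        return 0;
    }

    /* a vectored read whose payload is pulled through the callbacks above;
     * the driver fails the request when the summed SGE lengths do not cover
     * lba_count * sector size -- the "Invalid IO length parameter" case */
    static int
    submit_scattered_read(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
                          struct sgl_ctx *c, uint32_t lba_count,
                          spdk_nvme_cmd_cb cb_fn)
    {
        return spdk_nvme_ns_cmd_readv(ns, qpair, 0 /* lba */, lba_count,
                                      cb_fn, c, 0 /* io_flags */,
                                      reset_sgl, next_sge);
    }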
00:09:15.378 ************************************ 00:09:15.378 END TEST nvme_sgl 00:09:15.378 ************************************ 00:09:15.378 00:09:15.378 real 0m0.417s 00:09:15.378 user 0m0.212s 00:09:15.378 sys 0m0.155s 00:09:15.378 17:59:31 nvme.nvme_sgl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:15.378 17:59:31 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:09:15.378 17:59:31 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:15.378 17:59:31 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:15.378 17:59:31 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:15.378 17:59:31 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.378 ************************************ 00:09:15.378 START TEST nvme_e2edp 00:09:15.378 ************************************ 00:09:15.378 17:59:31 nvme.nvme_e2edp -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:09:15.638 NVMe Write/Read with End-to-End data protection test 00:09:15.638 Attached to 0000:00:10.0 00:09:15.638 Attached to 0000:00:11.0 00:09:15.638 Attached to 0000:00:13.0 00:09:15.638 Attached to 0000:00:12.0 00:09:15.638 Cleaning up... 00:09:15.638 ************************************ 00:09:15.638 END TEST nvme_e2edp 00:09:15.638 ************************************ 00:09:15.638 00:09:15.638 real 0m0.338s 00:09:15.638 user 0m0.124s 00:09:15.638 sys 0m0.157s 00:09:15.638 17:59:32 nvme.nvme_e2edp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:15.638 17:59:32 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:09:15.898 17:59:32 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:15.898 17:59:32 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:15.898 17:59:32 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:15.898 17:59:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:15.898 ************************************ 00:09:15.898 START TEST nvme_reserve 00:09:15.898 ************************************ 00:09:15.898 17:59:32 nvme.nvme_reserve -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:09:16.157 ===================================================== 00:09:16.157 NVMe Controller at PCI bus 0, device 16, function 0 00:09:16.157 ===================================================== 00:09:16.157 Reservations: Not Supported 00:09:16.157 ===================================================== 00:09:16.157 NVMe Controller at PCI bus 0, device 17, function 0 00:09:16.157 ===================================================== 00:09:16.157 Reservations: Not Supported 00:09:16.157 ===================================================== 00:09:16.157 NVMe Controller at PCI bus 0, device 19, function 0 00:09:16.157 ===================================================== 00:09:16.157 Reservations: Not Supported 00:09:16.157 ===================================================== 00:09:16.157 NVMe Controller at PCI bus 0, device 18, function 0 00:09:16.157 ===================================================== 00:09:16.157 Reservations: Not Supported 00:09:16.157 Reservation test passed 00:09:16.157 ************************************ 00:09:16.157 END TEST nvme_reserve 00:09:16.157 ************************************ 00:09:16.157 00:09:16.157 real 0m0.356s 00:09:16.157 user 0m0.130s 00:09:16.157 sys 0m0.165s 00:09:16.157 17:59:32 
nvme.nvme_reserve -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:16.157 17:59:32 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:09:16.157 17:59:32 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:16.157 17:59:32 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:16.157 17:59:32 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:16.157 17:59:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.157 ************************************ 00:09:16.157 START TEST nvme_err_injection 00:09:16.157 ************************************ 00:09:16.157 17:59:32 nvme.nvme_err_injection -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:09:16.417 NVMe Error Injection test 00:09:16.417 Attached to 0000:00:10.0 00:09:16.417 Attached to 0000:00:11.0 00:09:16.417 Attached to 0000:00:13.0 00:09:16.417 Attached to 0000:00:12.0 00:09:16.417 0000:00:10.0: get features failed as expected 00:09:16.417 0000:00:11.0: get features failed as expected 00:09:16.417 0000:00:13.0: get features failed as expected 00:09:16.417 0000:00:12.0: get features failed as expected 00:09:16.417 0000:00:10.0: get features successfully as expected 00:09:16.417 0000:00:11.0: get features successfully as expected 00:09:16.417 0000:00:13.0: get features successfully as expected 00:09:16.417 0000:00:12.0: get features successfully as expected 00:09:16.417 0000:00:10.0: read failed as expected 00:09:16.417 0000:00:11.0: read failed as expected 00:09:16.417 0000:00:13.0: read failed as expected 00:09:16.417 0000:00:12.0: read failed as expected 00:09:16.417 0000:00:10.0: read successfully as expected 00:09:16.417 0000:00:11.0: read successfully as expected 00:09:16.417 0000:00:13.0: read successfully as expected 00:09:16.417 0000:00:12.0: read successfully as expected 00:09:16.417 Cleaning up... 00:09:16.417 ************************************ 00:09:16.417 END TEST nvme_err_injection 00:09:16.417 ************************************ 00:09:16.417 00:09:16.417 real 0m0.315s 00:09:16.417 user 0m0.111s 00:09:16.417 sys 0m0.155s 00:09:16.417 17:59:32 nvme.nvme_err_injection -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:16.417 17:59:32 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:09:16.417 17:59:32 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:16.417 17:59:32 nvme -- common/autotest_common.sh@1103 -- # '[' 9 -le 1 ']' 00:09:16.417 17:59:32 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:16.417 17:59:32 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:16.417 ************************************ 00:09:16.417 START TEST nvme_overhead 00:09:16.417 ************************************ 00:09:16.417 17:59:32 nvme.nvme_overhead -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:09:17.794 Initializing NVMe Controllers 00:09:17.794 Attached to 0000:00:10.0 00:09:17.794 Attached to 0000:00:11.0 00:09:17.794 Attached to 0000:00:13.0 00:09:17.794 Attached to 0000:00:12.0 00:09:17.794 Initialization complete. Launching workers. 
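[Editor's note on the overhead run just launched: it ends in two cumulative histograms, submit and complete, where each row gives a latency bucket and the running percentage of I/Os at or below that bucket. A percentile can therefore be read off as the first bucket whose cumulative share reaches the target. A small standalone sketch of that lookup; the bucket arrays here are illustrative values, not this run's data.]

    #include <stdio.h>

    /* return the upper bound (in us) of the first bucket whose cumulative
     * percentage reaches percentile p */
    static double
    latency_percentile(const double *bucket_us, const double *cum_pct,
                       int n, double p)
    {
        for (int i = 0; i < n; i++) {
            if (cum_pct[i] >= p) {
                return bucket_us[i];
            }
        }
        return bucket_us[n - 1];
    }

    int
    main(void)
    {
        /* illustrative buckets only */
        const double bucket_us[] = { 14.78, 15.36, 16.64, 17.92, 108.45 };
        const double cum_pct[]   = { 0.01, 49.38, 71.46, 81.94, 100.00 };

        printf("p99 <= %.2f us\n",
               latency_percentile(bucket_us, cum_pct, 5, 99.0));
        return 0;
    }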
00:09:17.794 submit (in ns) avg, min, max = 16536.8, 14695.0, 108063.6 00:09:17.794 complete (in ns) avg, min, max = 10966.9, 9550.5, 106385.5 00:09:17.794 00:09:17.794 Submit histogram 00:09:17.794 ================ 00:09:17.794 Range in us Cumulative Count 00:09:17.794 [per-bucket data elided: cumulative count rises from 0.0085% at the 14.662-14.720 us bucket to 100.0000% at the 107.985-108.451 us bucket] 00:09:17.795 00:09:17.795 Complete histogram 00:09:17.795 ================== 00:09:17.795 Range in us Cumulative Count 00:09:17.795 [per-bucket data elided: 0.1440% at 9.542-9.600 us through 100.0000% at 106.124-106.589 us] 00:09:17.797 00:09:17.797 ************************************ 00:09:17.797 END TEST nvme_overhead 00:09:17.797 ************************************ 00:09:17.797 00:09:17.797 real 0m1.315s 00:09:17.797 user 0m1.116s 00:09:17.797 sys 0m0.147s 00:09:17.797 17:59:34 nvme.nvme_overhead -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:17.797 17:59:34 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:09:17.797 17:59:34 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 17:59:34 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 17:59:34 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 17:59:34 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:17.797 ************************************ 00:09:17.797 START TEST nvme_arbitration 00:09:17.797 ************************************ 00:09:17.797 17:59:34 nvme.nvme_arbitration -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i
0 00:09:21.985 Initializing NVMe Controllers 00:09:21.985 Attached to 0000:00:10.0 00:09:21.985 Attached to 0000:00:11.0 00:09:21.985 Attached to 0000:00:13.0 00:09:21.985 Attached to 0000:00:12.0 00:09:21.985 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:09:21.985 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:09:21.985 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:09:21.985 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:09:21.985 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:09:21.985 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:09:21.985 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:09:21.985 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:09:21.985 Initialization complete. Launching workers. 00:09:21.985 Starting thread on core 1 with urgent priority queue 00:09:21.985 Starting thread on core 2 with urgent priority queue 00:09:21.985 Starting thread on core 3 with urgent priority queue 00:09:21.985 Starting thread on core 0 with urgent priority queue 00:09:21.985 QEMU NVMe Ctrl (12340 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:09:21.985 QEMU NVMe Ctrl (12342 ) core 0: 704.00 IO/s 142.05 secs/100000 ios 00:09:21.985 QEMU NVMe Ctrl (12341 ) core 1: 725.33 IO/s 137.87 secs/100000 ios 00:09:21.985 QEMU NVMe Ctrl (12342 ) core 1: 725.33 IO/s 137.87 secs/100000 ios 00:09:21.985 QEMU NVMe Ctrl (12343 ) core 2: 704.00 IO/s 142.05 secs/100000 ios 00:09:21.985 QEMU NVMe Ctrl (12342 ) core 3: 469.33 IO/s 213.07 secs/100000 ios 00:09:21.985 ======================================================== 00:09:21.985 00:09:21.985 ************************************ 00:09:21.985 END TEST nvme_arbitration 00:09:21.985 ************************************ 00:09:21.985 00:09:21.985 real 0m3.478s 00:09:21.985 user 0m9.439s 00:09:21.985 sys 0m0.177s 00:09:21.985 17:59:37 nvme.nvme_arbitration -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.985 17:59:37 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:09:21.985 17:59:37 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:21.985 17:59:37 nvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:09:21.985 17:59:37 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:21.985 17:59:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.985 ************************************ 00:09:21.985 START TEST nvme_single_aen 00:09:21.985 ************************************ 00:09:21.985 17:59:37 nvme.nvme_single_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:09:21.985 Asynchronous Event Request test 00:09:21.985 Attached to 0000:00:10.0 00:09:21.985 Attached to 0000:00:11.0 00:09:21.985 Attached to 0000:00:13.0 00:09:21.985 Attached to 0000:00:12.0 00:09:21.985 Reset controller to setup AER completions for this process 00:09:21.985 Registering asynchronous event callbacks... 
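[Editor's note on the aer run that begins above: the tool registers an asynchronous-event callback on each controller, then lowers the temperature threshold feature so the controller raises a temperature AER; the threshold and "aer_cb for log page 2" lines below are that round trip. A minimal sketch of the registration half using the public SPDK calls; the polling loop's exit condition and error handling are simplified assumptions.]

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* fires when the controller posts an asynchronous event completion */
    static void
    aer_cb(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl)) {
            fprintf(stderr, "AER failed\n");
            return;
        }
        /* cdw0 encodes the event type/info; log page 2 (SMART / health)
         * carries the temperature event seen in the output below */
        printf("aer_cb: cdw0 0x%08x\n", cpl->cdw0);
    }

    static void
    arm_and_poll(struct spdk_nvme_ctrlr *ctrlr)
    {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
        /* AER completions only surface while the admin queue is polled */
        for (;;) {
            spdk_nvme_ctrlr_process_admin_completions(ctrlr);
            /* a real tool breaks out once its expected events arrive */
            break;
        }
    }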
00:09:21.985 Getting orig temperature thresholds of all controllers 00:09:21.985 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:21.985 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:21.985 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:21.985 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:09:21.985 Setting all controllers temperature threshold low to trigger AER 00:09:21.985 Waiting for all controllers temperature threshold to be set lower 00:09:21.985 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:21.985 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:09:21.985 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:21.985 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:09:21.985 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:21.985 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:09:21.985 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:09:21.985 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:09:21.985 Waiting for all controllers to trigger AER and reset threshold 00:09:21.985 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.985 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.985 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.985 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:09:21.985 Cleaning up... 00:09:21.985 ************************************ 00:09:21.985 END TEST nvme_single_aen 00:09:21.985 ************************************ 00:09:21.985 00:09:21.985 real 0m0.377s 00:09:21.985 user 0m0.135s 00:09:21.985 sys 0m0.184s 00:09:21.985 17:59:38 nvme.nvme_single_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.985 17:59:38 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:09:21.985 17:59:38 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:09:21.985 17:59:38 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:21.985 17:59:38 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:21.985 17:59:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:09:21.985 ************************************ 00:09:21.985 START TEST nvme_doorbell_aers 00:09:21.985 ************************************ 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1127 -- # nvme_doorbell_aers 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # bdfs=() 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1496 -- # local bdfs 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:21.985 17:59:38 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:09:22.244 [2024-10-28 17:59:38.557102] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:09:32.209 Executing: test_write_invalid_db 00:09:32.209 Waiting for AER completion... 00:09:32.209 Failure: test_write_invalid_db 00:09:32.209 00:09:32.209 Executing: test_invalid_db_write_overflow_sq 00:09:32.209 Waiting for AER completion... 00:09:32.209 Failure: test_invalid_db_write_overflow_sq 00:09:32.209 00:09:32.209 Executing: test_invalid_db_write_overflow_cq 00:09:32.209 Waiting for AER completion... 00:09:32.209 Failure: test_invalid_db_write_overflow_cq 00:09:32.209 00:09:32.209 17:59:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:32.209 17:59:48 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:09:32.209 [2024-10-28 17:59:48.646721] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:09:42.179 Executing: test_write_invalid_db 00:09:42.179 Waiting for AER completion... 00:09:42.179 Failure: test_write_invalid_db 00:09:42.179 00:09:42.179 Executing: test_invalid_db_write_overflow_sq 00:09:42.179 Waiting for AER completion... 00:09:42.179 Failure: test_invalid_db_write_overflow_sq 00:09:42.179 00:09:42.179 Executing: test_invalid_db_write_overflow_cq 00:09:42.179 Waiting for AER completion... 00:09:42.179 Failure: test_invalid_db_write_overflow_cq 00:09:42.179 00:09:42.179 17:59:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:42.179 17:59:58 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:09:42.179 [2024-10-28 17:59:58.647061] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:09:52.181 Executing: test_write_invalid_db 00:09:52.181 Waiting for AER completion... 00:09:52.181 Failure: test_write_invalid_db 00:09:52.181 00:09:52.181 Executing: test_invalid_db_write_overflow_sq 00:09:52.181 Waiting for AER completion... 00:09:52.181 Failure: test_invalid_db_write_overflow_sq 00:09:52.181 00:09:52.181 Executing: test_invalid_db_write_overflow_cq 00:09:52.181 Waiting for AER completion... 
00:09:52.181 Failure: test_invalid_db_write_overflow_cq 00:09:52.181 00:09:52.181 18:00:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:09:52.181 18:00:08 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:09:52.439 [2024-10-28 18:00:08.728828] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 Executing: test_write_invalid_db 00:10:02.399 Waiting for AER completion... 00:10:02.399 Failure: test_write_invalid_db 00:10:02.399 00:10:02.399 Executing: test_invalid_db_write_overflow_sq 00:10:02.399 Waiting for AER completion... 00:10:02.399 Failure: test_invalid_db_write_overflow_sq 00:10:02.399 00:10:02.399 Executing: test_invalid_db_write_overflow_cq 00:10:02.399 Waiting for AER completion... 00:10:02.399 Failure: test_invalid_db_write_overflow_cq 00:10:02.399 00:10:02.399 ************************************ 00:10:02.399 END TEST nvme_doorbell_aers 00:10:02.399 ************************************ 00:10:02.399 00:10:02.399 real 0m40.246s 00:10:02.399 user 0m34.153s 00:10:02.399 sys 0m5.716s 00:10:02.399 18:00:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.399 18:00:18 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:10:02.399 18:00:18 nvme -- nvme/nvme.sh@97 -- # uname 00:10:02.399 18:00:18 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:10:02.399 18:00:18 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:02.399 18:00:18 nvme -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:10:02.399 18:00:18 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.399 18:00:18 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.399 ************************************ 00:10:02.399 START TEST nvme_multi_aen 00:10:02.399 ************************************ 00:10:02.399 18:00:18 nvme.nvme_multi_aen -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:10:02.399 [2024-10-28 18:00:18.758125] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.758220] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.758243] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.759904] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.759952] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.759970] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.761400] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. 
Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.761445] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.761466] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.762909] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.762953] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 [2024-10-28 18:00:18.762983] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64599) is not found. Dropping the request. 00:10:02.399 Child process pid: 65111 00:10:02.657 [Child] Asynchronous Event Request test 00:10:02.657 [Child] Attached to 0000:00:10.0 00:10:02.657 [Child] Attached to 0000:00:11.0 00:10:02.657 [Child] Attached to 0000:00:13.0 00:10:02.657 [Child] Attached to 0000:00:12.0 00:10:02.657 [Child] Registering asynchronous event callbacks... 00:10:02.657 [Child] Getting orig temperature thresholds of all controllers 00:10:02.657 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:02.657 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:02.657 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:02.657 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:02.657 [Child] Waiting for all controllers to trigger AER and reset threshold 00:10:02.657 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:02.657 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:02.657 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:02.657 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:02.657 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.657 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.657 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.657 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.657 [Child] Cleaning up... 00:10:02.657 Asynchronous Event Request test 00:10:02.657 Attached to 0000:00:10.0 00:10:02.657 Attached to 0000:00:11.0 00:10:02.657 Attached to 0000:00:13.0 00:10:02.657 Attached to 0000:00:12.0 00:10:02.657 Reset controller to setup AER completions for this process 00:10:02.657 Registering asynchronous event callbacks... 
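The nvme_multi_aen stage running above exercises Asynchronous Event Requests from two processes at once: a parent and a forked child (the [Child] lines) both register AER callbacks, drop each controller's temperature threshold from the original 343 K below the current 323 K reading so the device fires a temperature AER, then restore the threshold. To reproduce it by hand, the invocation is the one from the run_test line; the flag meanings below are inferred from this output rather than from documentation:

```bash
rootdir=/home/vagrant/spdk_repo/spdk

# -m and -T match the CI invocation (multi-process and temperature-
# threshold behavior, judging by the output above); -i 0 is the
# shared-memory id that lets parent and child share one DPDK instance.
if "$rootdir/test/nvme/aer/aer" -m -T -i 0; then
    echo "AERs fired and thresholds restored on all controllers"
fi
```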
00:10:02.657 Getting orig temperature thresholds of all controllers 00:10:02.657 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:02.657 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:02.657 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:02.657 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:10:02.657 Setting all controllers temperature threshold low to trigger AER 00:10:02.657 Waiting for all controllers temperature threshold to be set lower 00:10:02.657 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:02.657 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:10:02.657 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:02.657 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:10:02.657 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:02.657 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:10:02.657 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:10:02.657 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:10:02.657 Waiting for all controllers to trigger AER and reset threshold 00:10:02.657 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.657 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.657 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.657 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:10:02.657 Cleaning up... 00:10:02.657 ************************************ 00:10:02.657 END TEST nvme_multi_aen 00:10:02.657 ************************************ 00:10:02.657 00:10:02.657 real 0m0.577s 00:10:02.657 user 0m0.199s 00:10:02.657 sys 0m0.272s 00:10:02.657 18:00:19 nvme.nvme_multi_aen -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:02.657 18:00:19 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:10:02.657 18:00:19 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:02.657 18:00:19 nvme -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:10:02.657 18:00:19 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:02.657 18:00:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:02.657 ************************************ 00:10:02.657 START TEST nvme_startup 00:10:02.657 ************************************ 00:10:02.657 18:00:19 nvme.nvme_startup -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:10:03.223 Initializing NVMe Controllers 00:10:03.223 Attached to 0000:00:10.0 00:10:03.223 Attached to 0000:00:11.0 00:10:03.223 Attached to 0000:00:13.0 00:10:03.223 Attached to 0000:00:12.0 00:10:03.223 Initialization complete. 00:10:03.223 Time used:204812.938 (us). 
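nvme_startup, just above, is a timing probe: it attaches all four controllers and reports how long initialization took (Time used:204812.938 (us), about 0.2 s here). Reading the -t 1000000 argument against that output, it appears to be a microsecond budget for bring-up; that interpretation is inferred from the log, not documented in it. A sketch that re-runs the probe and extracts the figure:

```bash
set -o pipefail
rootdir=/home/vagrant/spdk_repo/spdk

# -t 1000000 is the value used by this CI run.
"$rootdir/test/nvme/startup/startup" -t 1000000 | grep 'Time used'
```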
00:10:03.223 ************************************ 00:10:03.223 END TEST nvme_startup 00:10:03.223 ************************************ 00:10:03.223 00:10:03.223 real 0m0.289s 00:10:03.223 user 0m0.099s 00:10:03.223 sys 0m0.136s 00:10:03.223 18:00:19 nvme.nvme_startup -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:03.223 18:00:19 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:10:03.223 18:00:19 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:10:03.223 18:00:19 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:03.223 18:00:19 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:03.223 18:00:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:03.223 ************************************ 00:10:03.223 START TEST nvme_multi_secondary 00:10:03.223 ************************************ 00:10:03.223 18:00:19 nvme.nvme_multi_secondary -- common/autotest_common.sh@1127 -- # nvme_multi_secondary 00:10:03.223 18:00:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65167 00:10:03.223 18:00:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:10:03.223 18:00:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65168 00:10:03.223 18:00:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:03.223 18:00:19 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:10:06.501 Initializing NVMe Controllers 00:10:06.501 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:06.501 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:06.501 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:06.501 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:06.501 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:06.501 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:06.501 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:06.501 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:06.501 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:06.501 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:06.501 Initialization complete. Launching workers. 
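nvme_multi_secondary launches three spdk_nvme_perf instances into one DPDK shared-memory group (-i 0): a 5-second primary on core mask 0x1 and two 3-second secondaries on 0x2 and 0x4, so the secondaries attach to controllers the primary already initialized. A compressed sketch of that choreography with the exact flag values from the trace; the short sleep is my own addition to keep the secondaries from racing the primary's DPDK init (the real nvme.sh synchronizes differently):

```bash
rootdir=/home/vagrant/spdk_repo/spdk
perf="$rootdir/build/bin/spdk_nvme_perf"

# Primary: 5 s of 4 KiB sequential reads at queue depth 16 on core 0.
"$perf" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 & pid0=$!
sleep 1   # give the primary time to own the shared-memory group

# Secondaries join shm group 0 on their own cores for 3 s each.
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 & pid1=$!
"$perf" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 & pid2=$!

wait "$pid1" "$pid2"   # secondaries finish first...
wait "$pid0"           # ...then the longer-running primary
```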
00:10:06.501 ======================================================== 00:10:06.501 Latency(us) 00:10:06.501 Device Information : IOPS MiB/s Average min max 00:10:06.501 PCIE (0000:00:10.0) NSID 1 from core 1: 5362.18 20.95 2981.96 1087.98 6185.00 00:10:06.501 PCIE (0000:00:11.0) NSID 1 from core 1: 5362.18 20.95 2983.51 1118.99 6309.29 00:10:06.501 PCIE (0000:00:13.0) NSID 1 from core 1: 5362.18 20.95 2983.70 1124.25 6655.59 00:10:06.501 PCIE (0000:00:12.0) NSID 1 from core 1: 5362.18 20.95 2984.02 1138.23 6176.10 00:10:06.501 PCIE (0000:00:12.0) NSID 2 from core 1: 5362.18 20.95 2984.23 1132.84 5999.71 00:10:06.501 PCIE (0000:00:12.0) NSID 3 from core 1: 5362.18 20.95 2984.35 1134.09 5902.15 00:10:06.501 ======================================================== 00:10:06.501 Total : 32173.09 125.68 2983.63 1087.98 6655.59 00:10:06.501 00:10:06.759 Initializing NVMe Controllers 00:10:06.759 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:06.759 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:06.759 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:06.759 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:06.759 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:06.759 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:06.759 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:06.759 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:06.759 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:06.759 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:06.759 Initialization complete. Launching workers. 00:10:06.759 ======================================================== 00:10:06.759 Latency(us) 00:10:06.759 Device Information : IOPS MiB/s Average min max 00:10:06.759 PCIE (0000:00:10.0) NSID 1 from core 2: 2380.56 9.30 6718.11 1798.30 14353.08 00:10:06.759 PCIE (0000:00:11.0) NSID 1 from core 2: 2380.56 9.30 6720.71 1539.17 14346.21 00:10:06.759 PCIE (0000:00:13.0) NSID 1 from core 2: 2380.56 9.30 6720.62 1700.69 16691.39 00:10:06.759 PCIE (0000:00:12.0) NSID 1 from core 2: 2380.56 9.30 6720.51 1724.08 13550.52 00:10:06.759 PCIE (0000:00:12.0) NSID 2 from core 2: 2380.56 9.30 6720.42 1529.79 13455.81 00:10:06.759 PCIE (0000:00:12.0) NSID 3 from core 2: 2380.56 9.30 6720.34 1383.01 14905.67 00:10:06.759 ======================================================== 00:10:06.759 Total : 14283.37 55.79 6720.12 1383.01 16691.39 00:10:06.759 00:10:06.759 18:00:23 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65167 00:10:08.657 Initializing NVMe Controllers 00:10:08.657 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:08.657 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:08.657 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:08.657 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:08.657 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:08.657 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:08.657 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:08.657 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:08.657 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:08.657 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:08.657 Initialization complete. Launching workers. 
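A quick sanity check that works on any of these perf tables: throughput should equal IOPS times the 4096-byte transfer size. For the per-namespace rows of the core 1 table above, 5362.18 IOPS × 4096 B ÷ 2^20 ≈ 20.95 MiB/s, exactly the MiB/s column, and the totals row (32173.09 IOPS → 125.68 MiB/s) checks out the same way:

```bash
# Verify MiB/s = IOPS * 4096 / 2^20 for a row of the table above.
awk 'BEGIN { printf "%.2f MiB/s\n", 5362.18 * 4096 / 1048576 }'
# -> 20.95 MiB/s, matching the per-namespace rows from core 1
```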
00:10:08.657 ======================================================== 00:10:08.657 Latency(us) 00:10:08.657 Device Information : IOPS MiB/s Average min max 00:10:08.657 PCIE (0000:00:10.0) NSID 1 from core 0: 7585.01 29.63 2107.62 939.72 17974.24 00:10:08.657 PCIE (0000:00:11.0) NSID 1 from core 0: 7585.01 29.63 2108.76 979.75 17908.53 00:10:08.657 PCIE (0000:00:13.0) NSID 1 from core 0: 7585.01 29.63 2108.64 949.20 17974.02 00:10:08.657 PCIE (0000:00:12.0) NSID 1 from core 0: 7585.01 29.63 2108.50 939.61 17583.51 00:10:08.657 PCIE (0000:00:12.0) NSID 2 from core 0: 7585.01 29.63 2108.39 957.96 17717.86 00:10:08.657 PCIE (0000:00:12.0) NSID 3 from core 0: 7585.01 29.63 2108.25 966.61 17844.90 00:10:08.657 ======================================================== 00:10:08.657 Total : 45510.06 177.77 2108.36 939.61 17974.24 00:10:08.657 00:10:08.657 18:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65168 00:10:08.657 18:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65243 00:10:08.657 18:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:10:08.657 18:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65244 00:10:08.657 18:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:10:08.657 18:00:24 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:10:11.981 Initializing NVMe Controllers 00:10:11.982 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:11.982 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:11.982 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:11.982 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:11.982 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:10:11.982 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:10:11.982 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:10:11.982 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:10:11.982 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:10:11.982 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:10:11.982 Initialization complete. Launching workers. 
00:10:11.982 ======================================================== 00:10:11.982 Latency(us) 00:10:11.982 Device Information : IOPS MiB/s Average min max 00:10:11.982 PCIE (0000:00:10.0) NSID 1 from core 0: 5063.00 19.78 3158.22 953.81 9571.00 00:10:11.982 PCIE (0000:00:11.0) NSID 1 from core 0: 5063.00 19.78 3159.76 972.50 9456.22 00:10:11.982 PCIE (0000:00:13.0) NSID 1 from core 0: 5063.00 19.78 3159.76 990.25 9463.34 00:10:11.982 PCIE (0000:00:12.0) NSID 1 from core 0: 5063.00 19.78 3159.85 986.63 9607.68 00:10:11.982 PCIE (0000:00:12.0) NSID 2 from core 0: 5063.00 19.78 3159.96 984.34 9823.33 00:10:11.982 PCIE (0000:00:12.0) NSID 3 from core 0: 5063.00 19.78 3160.05 985.35 9729.15 00:10:11.982 ======================================================== 00:10:11.982 Total : 30377.98 118.66 3159.60 953.81 9823.33 00:10:11.982 00:10:11.982 Initializing NVMe Controllers 00:10:11.982 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:11.982 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:11.982 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:11.982 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:11.982 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:10:11.982 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:10:11.982 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:10:11.982 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:10:11.982 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:10:11.982 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:10:11.982 Initialization complete. Launching workers. 00:10:11.982 ======================================================== 00:10:11.982 Latency(us) 00:10:11.982 Device Information : IOPS MiB/s Average min max 00:10:11.982 PCIE (0000:00:10.0) NSID 1 from core 1: 5187.94 20.27 3082.12 1048.66 11921.09 00:10:11.982 PCIE (0000:00:11.0) NSID 1 from core 1: 5187.94 20.27 3083.68 1070.44 11863.31 00:10:11.982 PCIE (0000:00:13.0) NSID 1 from core 1: 5187.94 20.27 3083.66 1080.25 11825.85 00:10:11.982 PCIE (0000:00:12.0) NSID 1 from core 1: 5187.94 20.27 3083.74 1292.69 12083.46 00:10:11.982 PCIE (0000:00:12.0) NSID 2 from core 1: 5187.94 20.27 3083.80 1288.31 12647.94 00:10:11.982 PCIE (0000:00:12.0) NSID 3 from core 1: 5187.94 20.27 3083.73 1103.17 12796.45 00:10:11.982 ======================================================== 00:10:11.982 Total : 31127.66 121.59 3083.46 1048.66 12796.45 00:10:11.982 00:10:13.881 Initializing NVMe Controllers 00:10:13.881 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:10:13.881 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:10:13.881 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:10:13.881 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:10:13.881 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:10:13.881 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:10:13.881 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:10:13.881 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:10:13.881 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:10:13.881 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:10:13.881 Initialization complete. Launching workers. 
00:10:13.881 ======================================================== 00:10:13.881 Latency(us) 00:10:13.881 Device Information : IOPS MiB/s Average min max 00:10:13.881 PCIE (0000:00:10.0) NSID 1 from core 2: 3169.08 12.38 5045.63 1058.41 22722.93 00:10:13.881 PCIE (0000:00:11.0) NSID 1 from core 2: 3169.08 12.38 5047.65 1074.81 16808.92 00:10:13.881 PCIE (0000:00:13.0) NSID 1 from core 2: 3169.08 12.38 5048.08 1091.41 18532.44 00:10:13.881 PCIE (0000:00:12.0) NSID 1 from core 2: 3165.88 12.37 5052.61 1087.84 19597.60 00:10:13.881 PCIE (0000:00:12.0) NSID 2 from core 2: 3169.08 12.38 5043.13 952.53 19692.93 00:10:13.881 PCIE (0000:00:12.0) NSID 3 from core 2: 3169.08 12.38 5043.32 919.15 22477.75 00:10:13.881 ======================================================== 00:10:13.881 Total : 19011.29 74.26 5046.74 919.15 22722.93 00:10:13.881 00:10:13.881 ************************************ 00:10:13.881 END TEST nvme_multi_secondary 00:10:13.881 ************************************ 00:10:13.881 18:00:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65243 00:10:13.881 18:00:30 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65244 00:10:13.881 00:10:13.881 real 0m10.759s 00:10:13.881 user 0m18.697s 00:10:13.881 sys 0m0.951s 00:10:13.881 18:00:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:13.881 18:00:30 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:10:13.881 18:00:30 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:10:13.881 18:00:30 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:10:13.881 18:00:30 nvme -- common/autotest_common.sh@1091 -- # [[ -e /proc/64173 ]] 00:10:13.881 18:00:30 nvme -- common/autotest_common.sh@1092 -- # kill 64173 00:10:13.881 18:00:30 nvme -- common/autotest_common.sh@1093 -- # wait 64173 00:10:13.881 [2024-10-28 18:00:30.259757] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.881 [2024-10-28 18:00:30.259861] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.881 [2024-10-28 18:00:30.259907] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.881 [2024-10-28 18:00:30.259930] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.881 [2024-10-28 18:00:30.262683] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.881 [2024-10-28 18:00:30.262744] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.882 [2024-10-28 18:00:30.262766] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.882 [2024-10-28 18:00:30.262786] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.882 [2024-10-28 18:00:30.265654] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 
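The flood of "owning process (pid 65110) is not found. Dropping the request." messages here comes from teardown, not from a failure: kill_stub is stopping the long-lived stub process that held the PCIe controllers for the whole nvme suite, and admin commands still queued on behalf of already-exited test processes are discarded. A sketch of that cleanup, with the stub pid and ready-file names taken from the trace:

```bash
# Stop the SPDK stub process and remove its ready-file, as kill_stub
# does above. 64173 and /var/run/spdk_stub0 are this run's values.
stub_pid=64173
if [[ -e /proc/$stub_pid ]]; then
    kill "$stub_pid"
    wait "$stub_pid" 2>/dev/null   # reap it if it is our child
fi
rm -f /var/run/spdk_stub0
```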
00:10:13.882 [2024-10-28 18:00:30.265714] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.882 [2024-10-28 18:00:30.265736] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.882 [2024-10-28 18:00:30.265756] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.882 [2024-10-28 18:00:30.268457] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.882 [2024-10-28 18:00:30.268517] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.882 [2024-10-28 18:00:30.268538] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:13.882 [2024-10-28 18:00:30.268558] nvme_pcie_common.c: 311:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65110) is not found. Dropping the request. 00:10:14.139 [2024-10-28 18:00:30.464758] nvme_cuse.c:1023:cuse_thread: *NOTICE*: Cuse thread exited. 00:10:14.139 18:00:30 nvme -- common/autotest_common.sh@1095 -- # rm -f /var/run/spdk_stub0 00:10:14.139 18:00:30 nvme -- common/autotest_common.sh@1099 -- # echo 2 00:10:14.139 18:00:30 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:14.139 18:00:30 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:14.139 18:00:30 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:14.139 18:00:30 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:14.139 ************************************ 00:10:14.139 START TEST bdev_nvme_reset_stuck_adm_cmd 00:10:14.140 ************************************ 00:10:14.140 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:10:14.140 * Looking for test storage... 
00:10:14.140 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:10:14.140 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:14.140 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lcov --version 00:10:14.140 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:14.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.399 --rc genhtml_branch_coverage=1 00:10:14.399 --rc genhtml_function_coverage=1 00:10:14.399 --rc genhtml_legend=1 00:10:14.399 --rc geninfo_all_blocks=1 00:10:14.399 --rc geninfo_unexecuted_blocks=1 00:10:14.399 00:10:14.399 ' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:14.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.399 --rc genhtml_branch_coverage=1 00:10:14.399 --rc genhtml_function_coverage=1 00:10:14.399 --rc genhtml_legend=1 00:10:14.399 --rc geninfo_all_blocks=1 00:10:14.399 --rc geninfo_unexecuted_blocks=1 00:10:14.399 00:10:14.399 ' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:14.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.399 --rc genhtml_branch_coverage=1 00:10:14.399 --rc genhtml_function_coverage=1 00:10:14.399 --rc genhtml_legend=1 00:10:14.399 --rc geninfo_all_blocks=1 00:10:14.399 --rc geninfo_unexecuted_blocks=1 00:10:14.399 00:10:14.399 ' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:14.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:14.399 --rc genhtml_branch_coverage=1 00:10:14.399 --rc genhtml_function_coverage=1 00:10:14.399 --rc genhtml_legend=1 00:10:14.399 --rc geninfo_all_blocks=1 00:10:14.399 --rc geninfo_unexecuted_blocks=1 00:10:14.399 00:10:14.399 ' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:10:14.399 
18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # bdfs=() 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1507 -- # local bdfs 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1496 -- # local bdfs 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65400 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65400 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # '[' -z 65400 ']' 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.399 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:14.400 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
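Two setup steps are interleaved in the trace above. First, get_first_nvme_bdf asks gen_nvme.sh for a JSON config and extracts every controller's PCIe address with jq, taking the first (0000:00:10.0). Second, the test boots spdk_tgt on four cores (-m 0xF) and waits for its RPC socket; the socket poll below is a simplification of the script's waitforlisten, which also checks that RPC responds:

```bash
rootdir=/home/vagrant/spdk_repo/spdk

# gen_nvme.sh emits a JSON config whose params.traddr fields are the
# PCIe addresses of the local NVMe controllers.
mapfile -t bdfs < <("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')
bdf=${bdfs[0]}   # 0000:00:10.0 on this machine

# Start the SPDK target on cores 0-3 and wait for its RPC socket.
"$rootdir/build/bin/spdk_tgt" -m 0xF &
tgt_pid=$!
while [[ ! -S /var/tmp/spdk.sock ]]; do sleep 0.1; done

# ... issue RPCs here, then tear down:
kill "$tgt_pid" && wait "$tgt_pid"
```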
00:10:14.400 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:14.400 18:00:30 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:14.400 [2024-10-28 18:00:30.864136] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:10:14.400 [2024-10-28 18:00:30.864396] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65400 ] 00:10:14.658 [2024-10-28 18:00:31.077352] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:14.916 [2024-10-28 18:00:31.208831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:14.916 [2024-10-28 18:00:31.208897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:14.916 [2024-10-28 18:00:31.209034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.916 [2024-10-28 18:00:31.209049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@866 -- # return 0 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:15.849 nvme0n1 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_RF6Wm.txt 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:15.849 true 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1730138432 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65434 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:10:15.849 18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:10:15.850 
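With the timer started, the test is now at its core sequence: attach the controller as bdev nvme0, arm a one-shot injection on admin opcode 10 (Get Features) with --do_not_submit so the next such command is held inside the driver for up to 15 s, fire a Get Features via bdev_nvme_send_cmd (the base64 payload encodes opcode 0x0a with cdw10=7, NUMBER OF QUEUES, as the completion below confirms), then reset the controller and verify the held command completes with the injected sct=0/sc=1 (Invalid Command Opcode) status. The same sequence as plain RPC calls, copied from the trace; only the sleep standing in for the script's timing logic is mine:

```bash
rootdir=/home/vagrant/spdk_repo/spdk
rpc="$rootdir/scripts/rpc.py"

"$rpc" bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0

# Hold the next admin command with opc 10 and, once released, complete
# it manually with sct=0 / sc=1 (Invalid Command Opcode).
"$rpc" bdev_nvme_add_error_injection -n nvme0 --cmd-type admin \
    --opc 10 --timeout-in-us 15000000 --err-count 1 \
    --sct 0 --sc 1 --do_not_submit

# Send Get Features (NUMBER OF QUEUES); it stalls on the injection.
"$rpc" bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
    -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== &
send_pid=$!

sleep 2                                    # let the command queue up
"$rpc" bdev_nvme_reset_controller nvme0    # reset flushes it through
wait "$send_pid"                           # returns the forced status
"$rpc" bdev_nvme_detach_controller nvme0
```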
18:00:32 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:17.748 [2024-10-28 18:00:34.118958] nvme_ctrlr.c:1727:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:10:17.748 [2024-10-28 18:00:34.119406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:10:17.748 [2024-10-28 18:00:34.119467] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:10:17.748 [2024-10-28 18:00:34.119489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:17.748 [2024-10-28 18:00:34.121501] bdev_nvme.c:2250:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:10:17.748 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65434 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65434 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65434 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_RF6Wm.txt 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:17.748 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_RF6Wm.txt 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65400 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # '[' -z 65400 ']' 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # kill -0 65400 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # uname 00:10:18.006 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:18.007 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 65400 00:10:18.007 killing process with pid 65400 00:10:18.007 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:18.007 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:18.007 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 65400' 00:10:18.007 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@971 -- # kill 65400 00:10:18.007 18:00:34 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@976 -- # wait 65400 00:10:19.906 18:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:10:19.906 18:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:10:19.906 00:10:19.906 real 0m5.868s 00:10:19.906 user 0m20.675s 00:10:19.906 sys 0m0.634s 00:10:19.906 18:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:10:19.906 18:00:36 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:10:19.906 ************************************ 00:10:19.906 END TEST bdev_nvme_reset_stuck_adm_cmd 00:10:19.906 ************************************ 00:10:20.165 18:00:36 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:10:20.165 18:00:36 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:10:20.165 18:00:36 nvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:20.165 18:00:36 nvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:20.165 18:00:36 nvme -- common/autotest_common.sh@10 -- # set +x 00:10:20.165 ************************************ 00:10:20.165 START TEST nvme_fio 00:10:20.165 ************************************ 00:10:20.165 18:00:36 nvme.nvme_fio -- common/autotest_common.sh@1127 -- # nvme_fio_test 00:10:20.165 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:10:20.165 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:10:20.165 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:10:20.165 18:00:36 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # bdfs=() 00:10:20.165 18:00:36 nvme.nvme_fio -- common/autotest_common.sh@1496 -- # local bdfs 00:10:20.165 18:00:36 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:10:20.165 18:00:36 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:10:20.165 18:00:36 nvme.nvme_fio -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:10:20.165 18:00:36 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # (( 4 == 0 )) 00:10:20.165 18:00:36 nvme.nvme_fio -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:10:20.165 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:10:20.165 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:10:20.165 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:20.165 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:20.165 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:20.423 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:10:20.423 18:00:36 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:20.683 18:00:37 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:20.683 18:00:37 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:10:20.683 18:00:37 nvme.nvme_fio -- 
common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:10:20.683 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:20.684 18:00:37 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:10:20.942 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:20.942 fio-3.35 00:10:20.942 Starting 1 thread 00:10:24.227 00:10:24.227 test: (groupid=0, jobs=1): err= 0: pid=65578: Mon Oct 28 18:00:40 2024 00:10:24.227 read: IOPS=15.2k, BW=59.4MiB/s (62.3MB/s)(119MiB/2001msec) 00:10:24.227 slat (usec): min=4, max=599, avg= 6.59, stdev= 4.58 00:10:24.227 clat (usec): min=393, max=9509, avg=4186.21, stdev=753.08 00:10:24.227 lat (usec): min=399, max=9600, avg=4192.80, stdev=753.97 00:10:24.227 clat percentiles (usec): 00:10:24.227 | 1.00th=[ 2507], 5.00th=[ 3261], 10.00th=[ 3458], 20.00th=[ 3654], 00:10:24.227 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4228], 60.00th=[ 4359], 00:10:24.227 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5080], 00:10:24.227 | 99.00th=[ 7046], 99.50th=[ 7701], 99.90th=[ 8291], 99.95th=[ 8586], 00:10:24.227 | 99.99th=[ 9372] 00:10:24.227 bw ( KiB/s): min=59096, max=65552, per=100.00%, avg=62778.67, stdev=3322.67, samples=3 00:10:24.227 iops : min=14774, max=16388, avg=15694.67, stdev=830.67, samples=3 00:10:24.227 write: IOPS=15.2k, BW=59.5MiB/s (62.4MB/s)(119MiB/2001msec); 0 zone resets 00:10:24.227 slat (usec): min=4, max=376, avg= 6.63, stdev= 3.11 00:10:24.227 clat (usec): min=338, max=9434, avg=4189.36, stdev=751.91 00:10:24.227 lat (usec): min=345, max=9446, avg=4195.99, stdev=752.79 00:10:24.227 clat percentiles (usec): 00:10:24.227 | 1.00th=[ 2474], 5.00th=[ 3261], 10.00th=[ 3490], 20.00th=[ 3654], 00:10:24.227 | 30.00th=[ 3785], 40.00th=[ 3916], 50.00th=[ 4228], 60.00th=[ 4359], 00:10:24.227 | 70.00th=[ 4490], 80.00th=[ 4621], 90.00th=[ 4752], 95.00th=[ 5080], 00:10:24.227 | 99.00th=[ 6980], 99.50th=[ 7701], 99.90th=[ 8291], 99.95th=[ 8586], 00:10:24.227 | 99.99th=[ 9241] 00:10:24.227 bw ( KiB/s): min=58184, max=64936, per=100.00%, avg=62368.00, stdev=3654.58, samples=3 00:10:24.227 iops : min=14546, max=16234, avg=15592.00, stdev=913.65, samples=3 00:10:24.227 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:24.227 lat (msec) : 2=0.32%, 4=43.10%, 10=56.55% 00:10:24.227 cpu : usr=98.45%, sys=0.25%, ctx=20, majf=0, minf=607 00:10:24.227 IO 
depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:24.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:24.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:24.227 issued rwts: total=30430,30491,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:24.227 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:24.227 00:10:24.227 Run status group 0 (all jobs): 00:10:24.227 READ: bw=59.4MiB/s (62.3MB/s), 59.4MiB/s-59.4MiB/s (62.3MB/s-62.3MB/s), io=119MiB (125MB), run=2001-2001msec 00:10:24.227 WRITE: bw=59.5MiB/s (62.4MB/s), 59.5MiB/s-59.5MiB/s (62.4MB/s-62.4MB/s), io=119MiB (125MB), run=2001-2001msec 00:10:24.227 ----------------------------------------------------- 00:10:24.227 Suppressions used: 00:10:24.227 count bytes template 00:10:24.227 1 32 /usr/src/fio/parse.c 00:10:24.227 1 8 libtcmalloc_minimal.so 00:10:24.227 ----------------------------------------------------- 00:10:24.227 00:10:24.227 18:00:40 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:10:24.227 18:00:40 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:24.227 18:00:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:24.227 18:00:40 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:24.485 18:00:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:24.485 18:00:40 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:10:24.743 18:00:41 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:24.743 18:00:41 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:24.744 18:00:41 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:10:25.002 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:25.002 fio-3.35 00:10:25.002 Starting 1 thread 00:10:28.286 00:10:28.286 test: (groupid=0, jobs=1): err= 0: pid=65640: Mon Oct 28 18:00:44 2024 00:10:28.286 read: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(124MiB/2001msec) 00:10:28.286 slat (nsec): min=4620, max=51079, avg=6281.57, stdev=1989.82 00:10:28.286 clat (usec): min=388, max=8257, avg=4008.34, stdev=626.47 00:10:28.286 lat (usec): min=394, max=8264, avg=4014.62, stdev=627.24 00:10:28.286 clat percentiles (usec): 00:10:28.286 | 1.00th=[ 2900], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3556], 00:10:28.286 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3818], 60.00th=[ 4178], 00:10:28.286 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4752], 00:10:28.286 | 99.00th=[ 6718], 99.50th=[ 7373], 99.90th=[ 7832], 99.95th=[ 7898], 00:10:28.286 | 99.99th=[ 8029] 00:10:28.286 bw ( KiB/s): min=58424, max=67640, per=97.34%, avg=61858.67, stdev=5036.25, samples=3 00:10:28.286 iops : min=14606, max=16910, avg=15464.67, stdev=1259.06, samples=3 00:10:28.286 write: IOPS=15.9k, BW=62.1MiB/s (65.1MB/s)(124MiB/2001msec); 0 zone resets 00:10:28.286 slat (nsec): min=4638, max=68038, avg=6315.43, stdev=1906.80 00:10:28.286 clat (usec): min=363, max=12019, avg=4013.56, stdev=672.86 00:10:28.286 lat (usec): min=369, max=12025, avg=4019.88, stdev=673.57 00:10:28.286 clat percentiles (usec): 00:10:28.286 | 1.00th=[ 2900], 5.00th=[ 3392], 10.00th=[ 3458], 20.00th=[ 3556], 00:10:28.286 | 30.00th=[ 3621], 40.00th=[ 3720], 50.00th=[ 3818], 60.00th=[ 4178], 00:10:28.286 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 4752], 00:10:28.286 | 99.00th=[ 6849], 99.50th=[ 7504], 99.90th=[11076], 99.95th=[11469], 00:10:28.286 | 99.99th=[11863] 00:10:28.286 bw ( KiB/s): min=58704, max=66920, per=96.58%, avg=61448.00, stdev=4738.90, samples=3 00:10:28.286 iops : min=14676, max=16730, avg=15362.00, stdev=1184.72, samples=3 00:10:28.286 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:28.286 lat (msec) : 2=0.14%, 4=54.71%, 10=45.06%, 20=0.06% 00:10:28.286 cpu : usr=98.90%, sys=0.10%, ctx=5, majf=0, minf=606 00:10:28.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:10:28.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:28.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:28.287 issued rwts: total=31789,31827,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:28.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:28.287 00:10:28.287 Run status group 0 (all jobs): 00:10:28.287 READ: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=124MiB (130MB), run=2001-2001msec 00:10:28.287 WRITE: bw=62.1MiB/s (65.1MB/s), 62.1MiB/s-62.1MiB/s (65.1MB/s-65.1MB/s), io=124MiB (130MB), run=2001-2001msec 00:10:28.287 ----------------------------------------------------- 00:10:28.287 Suppressions used: 00:10:28.287 count bytes template 00:10:28.287 1 32 /usr/src/fio/parse.c 00:10:28.287 1 8 libtcmalloc_minimal.so 00:10:28.287 ----------------------------------------------------- 00:10:28.287 00:10:28.287 18:00:44 nvme.nvme_fio -- 
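Each nvme_fio pass above is stock fio driven through SPDK's fio plugin: the wrapper ldd's the plugin to locate the sanitizer runtime (the grep libasan lines), LD_PRELOADs libasan plus the plugin into /usr/src/fio/fio, and hands the controller to fio as a filename. Note the dots in traddr=0000.00.11.0: fio reserves ':' in filenames, so the transport ID is written with '.' separators. One pass, reconstructed with the paths exactly as this host has them:

```bash
rootdir=/home/vagrant/spdk_repo/spdk

# ASAN build: the sanitizer runtime must be preloaded ahead of the
# plugin; a non-ASAN build would preload only spdk_nvme.
LD_PRELOAD="/usr/lib64/libasan.so.8 $rootdir/build/fio/spdk_nvme" \
    /usr/src/fio/fio "$rootdir/app/fio/nvme/example_config.fio" \
    '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096
```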
nvme/nvme.sh@44 -- # ran_fio=true 00:10:28.287 18:00:44 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:10:28.287 18:00:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:28.287 18:00:44 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:10:28.545 18:00:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:10:28.545 18:00:44 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:10:28.803 18:00:45 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:10:28.803 18:00:45 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib= 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:10:28.803 18:00:45 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:10:29.061 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:10:29.061 fio-3.35 00:10:29.061 Starting 1 thread 00:10:32.353 00:10:32.353 test: (groupid=0, jobs=1): err= 0: pid=65701: Mon Oct 28 18:00:48 2024 00:10:32.353 read: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(111MiB/2001msec) 00:10:32.353 slat (nsec): min=4640, max=87694, avg=7039.46, stdev=2478.42 00:10:32.353 clat (usec): min=284, max=9787, avg=4502.89, stdev=864.91 00:10:32.353 lat (usec): min=290, max=9793, avg=4509.93, stdev=865.90 00:10:32.353 clat percentiles (usec): 00:10:32.353 | 1.00th=[ 2769], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3949], 00:10:32.353 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 
00:10:29.061 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:10:29.061 fio-3.35
00:10:29.061 Starting 1 thread
00:10:32.353
00:10:32.353 test: (groupid=0, jobs=1): err= 0: pid=65701: Mon Oct 28 18:00:48 2024
00:10:32.353 read: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(111MiB/2001msec)
00:10:32.353 slat (nsec): min=4640, max=87694, avg=7039.46, stdev=2478.42
00:10:32.353 clat (usec): min=284, max=9787, avg=4502.89, stdev=864.91
00:10:32.353 lat (usec): min=290, max=9793, avg=4509.93, stdev=865.90
00:10:32.353 clat percentiles (usec):
00:10:32.353 | 1.00th=[ 2769], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3949],
00:10:32.353 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555],
00:10:32.353 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 6259],
00:10:32.353 | 99.00th=[ 8160], 99.50th=[ 8455], 99.90th=[ 8848], 99.95th=[ 8979],
00:10:32.353 | 99.99th=[ 9634]
00:10:32.353 bw ( KiB/s): min=50976, max=57176, per=97.13%, avg=55106.67, stdev=3577.26, samples=3
00:10:32.353 iops : min=12744, max=14294, avg=13776.67, stdev=894.32, samples=3
00:10:32.353 write: IOPS=14.2k, BW=55.4MiB/s (58.1MB/s)(111MiB/2001msec); 0 zone resets
00:10:32.353 slat (nsec): min=4717, max=48359, avg=7115.55, stdev=2294.80
00:10:32.353 clat (usec): min=295, max=9918, avg=4491.18, stdev=855.70
00:10:32.353 lat (usec): min=301, max=9925, avg=4498.30, stdev=856.61
00:10:32.353 clat percentiles (usec):
00:10:32.353 | 1.00th=[ 2769], 5.00th=[ 3392], 10.00th=[ 3654], 20.00th=[ 3949],
00:10:32.353 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4490], 60.00th=[ 4555],
00:10:32.353 | 70.00th=[ 4621], 80.00th=[ 4752], 90.00th=[ 5014], 95.00th=[ 6194],
00:10:32.353 | 99.00th=[ 8160], 99.50th=[ 8356], 99.90th=[ 8717], 99.95th=[ 9110],
00:10:32.353 | 99.99th=[ 9765]
00:10:32.353 bw ( KiB/s): min=51272, max=57344, per=97.27%, avg=55192.00, stdev=3400.24, samples=3
00:10:32.353 iops : min=12818, max=14336, avg=13798.00, stdev=850.06, samples=3
00:10:32.353 lat (usec) : 500=0.02%, 750=0.02%, 1000=0.01%
00:10:32.353 lat (msec) : 2=0.10%, 4=21.11%, 10=78.75%
00:10:32.353 cpu : usr=98.80%, sys=0.10%, ctx=15, majf=0, minf=607
00:10:32.353 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:10:32.353 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:32.353 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:32.353 issued rwts: total=28380,28385,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:32.353 latency : target=0, window=0, percentile=100.00%, depth=128
00:10:32.353
00:10:32.353 Run status group 0 (all jobs):
00:10:32.353 READ: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=111MiB (116MB), run=2001-2001msec
00:10:32.353 WRITE: bw=55.4MiB/s (58.1MB/s), 55.4MiB/s-55.4MiB/s (58.1MB/s-58.1MB/s), io=111MiB (116MB), run=2001-2001msec
00:10:32.353 -----------------------------------------------------
00:10:32.353 Suppressions used:
00:10:32.353 count bytes template
00:10:32.353 1 32 /usr/src/fio/parse.c
00:10:32.353 1 8 libtcmalloc_minimal.so
00:10:32.353 -----------------------------------------------------
00:10:32.353
00:10:32.353 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:10:32.353 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}"
00:10:32.353 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:10:32.353 18:00:48 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+'
00:10:32.612 18:00:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0'
00:10:32.612 18:00:49 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA'
00:10:33.177 18:00:49 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096
00:10:33.178 18:00:49 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1362 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local sanitizers
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # shift
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # local asan_lib=
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}"
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # awk '{print $3}'
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # grep libasan
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # break
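The loop just traced (common/autotest_common.sh@1346-1349) exists because fio itself is not built with ASan: before fio dlopen()s the instrumented SPDK ioengine, the wrapper asks ldd which sanitizer runtime the plugin links against and preloads that library ahead of the plugin. A condensed reconstruction of the wrapper, with names taken from the trace and the surrounding error handling omitted:

    # fio_plugin <plugin.so> <fio args...>: preload the plugin's sanitizer
    # runtime so fio can dlopen an ASan-instrumented ioengine.
    fio_plugin() {
        local fio_dir=/usr/src/fio
        local plugin=$1
        shift
        local sanitizers=('libasan' 'libclang_rt.asan')
        local sanitizer asan_lib=
        for sanitizer in "${sanitizers[@]}"; do
            # ldd resolves the runtime actually linked in, e.g. libasan.so.8.
            asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
            [[ -n $asan_lib ]] && break
        done
        LD_PRELOAD="$asan_lib $plugin" "$fio_dir/fio" "$@"
    }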
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme'
00:10:33.178 18:00:49 nvme.nvme_fio -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096
00:10:33.178 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128
00:10:33.178 fio-3.35
00:10:33.178 Starting 1 thread
00:10:38.441
00:10:38.441 test: (groupid=0, jobs=1): err= 0: pid=65762: Mon Oct 28 18:00:54 2024
00:10:38.441 read: IOPS=15.9k, BW=62.2MiB/s (65.2MB/s)(124MiB/2001msec)
00:10:38.441 slat (usec): min=4, max=114, avg= 6.20, stdev= 2.19
00:10:38.441 clat (usec): min=287, max=8862, avg=4008.77, stdev=683.08
00:10:38.441 lat (usec): min=294, max=8922, avg=4014.97, stdev=683.81
00:10:38.441 clat percentiles (usec):
00:10:38.441 | 1.00th=[ 2376], 5.00th=[ 3064], 10.00th=[ 3326], 20.00th=[ 3490],
00:10:38.441 | 30.00th=[ 3621], 40.00th=[ 3752], 50.00th=[ 4080], 60.00th=[ 4228],
00:10:38.441 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4883],
00:10:38.441 | 99.00th=[ 6521], 99.50th=[ 7308], 99.90th=[ 8094], 99.95th=[ 8160],
00:10:38.441 | 99.99th=[ 8717]
00:10:38.441 bw ( KiB/s): min=58184, max=71912, per=100.00%, avg=64533.33, stdev=6921.64, samples=3
00:10:38.441 iops : min=14546, max=17978, avg=16133.33, stdev=1730.41, samples=3
00:10:38.441 write: IOPS=15.9k, BW=62.2MiB/s (65.3MB/s)(125MiB/2001msec); 0 zone resets
00:10:38.441 slat (nsec): min=4656, max=59315, avg=6298.28, stdev=2103.04
00:10:38.441 clat (usec): min=324, max=8703, avg=4001.83, stdev=690.66
00:10:38.441 lat (usec): min=330, max=8710, avg=4008.13, stdev=691.38
00:10:38.441 clat percentiles (usec):
00:10:38.441 | 1.00th=[ 2376], 5.00th=[ 3064], 10.00th=[ 3326], 20.00th=[ 3490],
00:10:38.441 | 30.00th=[ 3589], 40.00th=[ 3752], 50.00th=[ 4080], 60.00th=[ 4228],
00:10:38.441 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 4883],
00:10:38.441 | 99.00th=[ 6587], 99.50th=[ 7504], 99.90th=[ 8225], 99.95th=[ 8455],
00:10:38.441 | 99.99th=[ 8586]
00:10:38.441 bw ( KiB/s): min=57528, max=71776, per=100.00%, avg=64269.33, stdev=7154.77, samples=3
00:10:38.441 iops : min=14382, max=17944, avg=16067.33, stdev=1788.69, samples=3
00:10:38.441 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01%
00:10:38.441 lat (msec) : 2=0.26%, 4=47.11%, 10=52.59%
00:10:38.441 cpu : usr=98.70%, sys=0.20%, ctx=16, majf=0, minf=604
00:10:38.441 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9%
00:10:38.441 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:38.441 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:10:38.441 issued rwts: total=31839,31878,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:38.441 latency : target=0, window=0, percentile=100.00%, depth=128
00:10:38.441
00:10:38.441 Run status group 0 (all jobs):
00:10:38.441 READ: bw=62.2MiB/s (65.2MB/s), 62.2MiB/s-62.2MiB/s (65.2MB/s-65.2MB/s), io=124MiB (130MB), run=2001-2001msec
00:10:38.441 WRITE: bw=62.2MiB/s (65.3MB/s), 62.2MiB/s-62.2MiB/s (65.3MB/s-65.3MB/s), io=125MiB (131MB), run=2001-2001msec
00:10:38.442 -----------------------------------------------------
00:10:38.442 Suppressions used:
00:10:38.442 count bytes template
00:10:38.442 1 32 /usr/src/fio/parse.c
00:10:38.442 1 8 libtcmalloc_minimal.so
00:10:38.442 -----------------------------------------------------
00:10:38.442
00:10:38.699 18:00:54 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true
00:10:38.699 18:00:54 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true
00:10:38.699
00:10:38.699 real 0m18.527s
00:10:38.699 user 0m14.023s
00:10:38.699 sys 0m4.674s
00:10:38.699 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:38.699 18:00:54 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x
00:10:38.699 ************************************
00:10:38.699 END TEST nvme_fio
00:10:38.699 ************************************
00:10:38.699 ************************************
00:10:38.699 END TEST nvme
00:10:38.699 ************************************
00:10:38.699
00:10:38.699 real 1m32.593s
00:10:38.699 user 3m46.945s
00:10:38.699 sys 0m16.916s
00:10:38.699 18:00:54 nvme -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:38.699 18:00:54 nvme -- common/autotest_common.sh@10 -- # set +x
00:10:38.699 18:00:55 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]]
00:10:38.699 18:00:55 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:10:38.699 18:00:55 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:38.699 18:00:55 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:38.699 18:00:55 -- common/autotest_common.sh@10 -- # set +x
00:10:38.699 ************************************
00:10:38.699 START TEST nvme_scc
00:10:38.699 ************************************
00:10:38.700 18:00:55 nvme_scc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh
00:10:38.700 * Looking for test storage...
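The START TEST/END TEST banners and the real/user/sys stamps above come from run_test in common/autotest_common.sh, which wraps every suite the same way; roughly (banner width and the argument-count guard visible in the trace are simplified):

    # run_test <name> <command...>: time one suite and bracket its output.
    run_test() {
        local test_name=$1
        shift
        echo '************************************'
        echo "START TEST $test_name"
        echo '************************************'
        time "$@"
        echo '************************************'
        echo "END TEST $test_name"
        echo '************************************'
    }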
00:10:38.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:38.700 18:00:55 nvme_scc -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:38.700 18:00:55 nvme_scc -- common/autotest_common.sh@1691 -- # lcov --version
00:10:38.700 18:00:55 nvme_scc -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:38.957 18:00:55 nvme_scc -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@336 -- # IFS=.-:
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@337 -- # IFS=.-:
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@338 -- # local 'op=<'
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@344 -- # case "$op" in
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@345 -- # : 1
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@365 -- # decimal 1
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@353 -- # local d=1
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@355 -- # echo 1
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@366 -- # decimal 2
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@353 -- # local d=2
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@355 -- # echo 2
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:38.957 18:00:55 nvme_scc -- scripts/common.sh@368 -- # return 0
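What just ran in scripts/common.sh is a component-wise version comparison: lt 1.15 2 splits both versions on '.', '-' and ':', validates each component as a decimal, and compares pairwise, so the installed lcov is judged older than 2. The '<' path of that helper in a self-contained sketch (the real cmp_versions also handles the other operators via its op argument):

    # lt A B: succeed when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++)); do
            ((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
            ((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
        done
        return 1 # equal is not less-than
    }

    lt 1.15 2 && echo older   # prints "older", matching the trace above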
00:10:38.957 18:00:55 nvme_scc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:38.957 18:00:55 nvme_scc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS=
00:10:38.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:38.957 --rc genhtml_branch_coverage=1
00:10:38.957 --rc genhtml_function_coverage=1
00:10:38.957 --rc genhtml_legend=1
00:10:38.957 --rc geninfo_all_blocks=1
00:10:38.957 --rc geninfo_unexecuted_blocks=1
00:10:38.957
00:10:38.957 '
00:10:38.957 18:00:55 nvme_scc -- common/autotest_common.sh@1704 -- # LCOV_OPTS='
00:10:38.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:38.957 --rc genhtml_branch_coverage=1
00:10:38.957 --rc genhtml_function_coverage=1
00:10:38.957 --rc genhtml_legend=1
00:10:38.957 --rc geninfo_all_blocks=1
00:10:38.957 --rc geninfo_unexecuted_blocks=1
00:10:38.957
00:10:38.957 '
00:10:38.957 18:00:55 nvme_scc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov
00:10:38.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:38.957 --rc genhtml_branch_coverage=1
00:10:38.957 --rc genhtml_function_coverage=1
00:10:38.957 --rc genhtml_legend=1
00:10:38.957 --rc geninfo_all_blocks=1
00:10:38.957 --rc geninfo_unexecuted_blocks=1
00:10:38.957
00:10:38.957 '
00:10:38.957 18:00:55 nvme_scc -- common/autotest_common.sh@1705 -- # LCOV='lcov
00:10:38.957 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:10:38.957 --rc genhtml_branch_coverage=1
00:10:38.957 --rc genhtml_function_coverage=1
00:10:38.957 --rc genhtml_legend=1
00:10:38.957 --rc geninfo_all_blocks=1
00:10:38.957 --rc geninfo_unexecuted_blocks=1
00:10:38.957
00:10:38.957 '
00:10:38.958 18:00:55 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:38.958 18:00:55 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob
00:10:38.958 18:00:55 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:38.958 18:00:55 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:38.958 18:00:55 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:38.958 18:00:55 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:38.958 18:00:55 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:38.958 18:00:55 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:38.958 18:00:55 nvme_scc -- paths/export.sh@5 -- # export PATH
00:10:38.958 18:00:55 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
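paths/export.sh, sourced just above, prepends the golangci/protoc/go directories unconditionally every time it runs, which is why the PATH it finally echoes carries four copies of each toolchain entry. Harmless for lookup, but if it ever needed to stay tidy, an idempotent prepend is small (illustration only, not what the script currently does):

    # Prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;           # already there, leave PATH unchanged
            *) PATH=$1:$PATH ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH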
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@10 -- # ctrls=()
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@11 -- # nvmes=()
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@12 -- # bdfs=()
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:10:38.958 18:00:55 nvme_scc -- nvme/functions.sh@14 -- # nvme_name=
00:10:38.958 18:00:55 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:38.958 18:00:55 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname
00:10:38.958 18:00:55 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]]
00:10:38.958 18:00:55 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]]
00:10:38.958 18:00:55 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:39.215 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:39.473 Waiting for block devices as requested
00:10:39.473 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:39.473 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:39.473 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:39.731 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:45.000 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:45.000 18:01:01 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:10:45.000 18:01:01 nvme_scc -- scripts/common.sh@18 -- # local i
00:10:45.000 18:01:01 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:10:45.000 18:01:01 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:45.000 18:01:01 nvme_scc -- scripts/common.sh@27 -- # return 0
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@18 -- # shift
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=:
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
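Everything from here to the end of the namespace dump is a single loop: nvme_get pipes nvme-cli's id-ctrl output through an IFS=':' read and evals each "reg : val" pair into a global associative array named after the controller, which is what generates the long run of eval/assignment entries below. A condensed reconstruction of the helper from test/common/nvme/functions.sh (whitespace handling simplified; values containing double quotes would need more care):

    # nvme_get <ref> <subcmd> <dev>: capture `nvme <subcmd> <dev>` output
    # into the global associative array named by <ref>.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}             # "vid " -> "vid"
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[${reg}]=\"${val# }\""   # nvme0[vid]="0x1b36"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }

    nvme_get nvme0 id-ctrl /dev/nvme0   # nvme0[vid]=0x1b36, nvme0[sn]='12341 ', ...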
00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:10:45.000 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
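The values being stashed are raw identify fields, many of them bitmasks that later checks probe with plain shell arithmetic. The lpa=0x7 (Log Page Attributes) captured just above, for instance, decodes per the NVMe spec as bits 0 through 2 set:

    lpa=0x7
    (( lpa & 0x1 )) && echo 'per-namespace SMART/Health log'
    (( lpa & 0x2 )) && echo 'Commands Supported and Effects log'
    (( lpa & 0x4 )) && echo 'extended Get Log Page (NSID/offset fields)'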
00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:10:45.001 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:10:45.002 18:01:01 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.002 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:45.003 18:01:01 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- 
# read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.003 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme0n1[dlfeat]="1"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
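What this stretch of the trace is executing is the register-parsing loop of nvme_get() in nvme/functions.sh: the nvme-cli output of id-ctrl/id-ns is read line by line, split on the first ':' (the IFS=: and read -r reg val pairs above), and each non-empty value is eval'ed into a global associative array named after the device. A minimal sketch reconstructed from the commands visible here, assuming this structure (the upstream function may differ in detail):

  nvme_get() {
      local ref=$1 reg val
      shift                                 # remaining args: subcommand + device
      local -gA "$ref=()"                   # e.g. nvme0n1 becomes a global assoc array (@20)
      while IFS=: read -r reg val; do
          [[ -n $val ]] || continue         # the [[ -n ... ]] guard seen at @22
          reg=${reg//[[:space:]]/}          # "ps 0 " -> "ps0"
          eval "${ref}[$reg]=\"${val# }\""  # the eval seen at @23, e.g. nvme0n1[nsze]="0x140000"
      done < <("/usr/local/src/nvme-cli/nvme" "$@")
  }

Each eval/assignment pair logged above and below is one iteration of this loop; multi-field values such as 'mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' survive intact because read -r puts everything after the first colon into val.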
00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
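The lbafN strings recorded just below encode the namespace's supported LBA formats: in each one, lbads is the log2 of the data block size (lbads:9 means 512-byte blocks, lbads:12 means 4096-byte), ms is the metadata bytes per block, and the low bits of flbas select the active format. For nvme0n1, flbas=0x4 points at lbaf4 ("ms:0 lbads:12 rp:0 (in use)"), so the namespace runs 4 KiB blocks with no metadata, and its size follows directly from the captured registers. A spec-level decoding of the values captured here, not something the test itself computes:

  # nvme0n1: nsze=0x140000 blocks of 2^12 bytes each
  echo $((0x140000 * (1 << 12)))   # 5368709120 bytes = exactly 5 GiB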
00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:45.004 18:01:01 
nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:45.004 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:45.005 18:01:01 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:45.005 18:01:01 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:45.005 18:01:01 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.005 18:01:01 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:45.005 18:01:01 nvme_scc -- 
nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:45.005 
18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.005 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 
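A few entries back, pci_can_use() (scripts/common.sh @18-@27) decided that the controller at BDF 0000:00:10.0 may be claimed: the list test at @21 matched nothing and the empty-list test at @25 succeeded, so @27 returned 0 and the device was picked up as nvme1. A sketch matching that control flow, with PCI_BLOCKED/PCI_ALLOWED as assumed variable names (the trace only shows their empty expansions):

  pci_can_use() {
      local i
      # @21: a block-listed BDF is rejected outright (list empty in this run)
      [[ " $PCI_BLOCKED " =~ " $1 " ]] && return 1
      # @25/@27: with no allow-list, every remaining device is usable
      [[ -z $PCI_ALLOWED ]] && return 0
      # otherwise the BDF must be explicitly allowed
      for i in $PCI_ALLOWED; do
          [[ $i == "$1" ]] && return 0
      done
      return 1
  }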
00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:45.006 18:01:01 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.006 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:45.007 18:01:01 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:45.007 18:01:01 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
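Everything in this section is one pass of the controller walk in nvme/functions.sh @47-@63: enumerate /sys/class/nvme/nvme*, filter by pci_can_use(), fill one associative array per controller (nvme_get ... id-ctrl) and per namespace (nvme_get ... id-ns), then register the results in the global ctrls/nvmes/bdfs/ordered_ctrls maps. A skeleton reconstructed from the visible commands; the BDF lookup, the map declarations, and the function name are assumptions, since the trace only shows their results (e.g. pci=0000:00:10.0):

  declare -A ctrls=() nvmes=() bdfs=()    # assumed to be set up by the caller
  declare -a ordered_ctrls=()

  scan_nvme_ctrls() {                     # hypothetical name for the enclosing function
      local ctrl ctrl_dev ns ns_dev pci
      for ctrl in /sys/class/nvme/nvme*; do                    # @47
          [[ -e $ctrl ]] || continue                           # @48
          pci=$(readlink -f "$ctrl/device") && pci=${pci##*/}  # @49 (assumed lookup)
          pci_can_use "$pci" || continue                       # @50
          ctrl_dev=${ctrl##*/}                                 # @51: nvme0, nvme1, ...
          nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"        # @52
          declare -gA "${ctrl_dev}_ns=()"                      # target for the nameref below
          local -n _ctrl_ns=${ctrl_dev}_ns                     # @53
          for ns in "$ctrl/${ctrl##*/}n"*; do                  # @54
              [[ -e $ns ]] || continue                         # @55
              ns_dev=${ns##*/}                                 # @56
              nvme_get "$ns_dev" id-ns "/dev/$ns_dev"          # @57
              _ctrl_ns[${ns##*n}]=$ns_dev                      # @58: keyed by namespace id
          done
          ctrls["$ctrl_dev"]=$ctrl_dev                         # @60
          nvmes["$ctrl_dev"]=${ctrl_dev}_ns                    # @61
          bdfs["$ctrl_dev"]=$pci                               # @62
          ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev           # @63
      done
  }

The remainder of the trace below is this loop's second iteration finishing up: the last id-ctrl registers for nvme1, the id-ns parse for nvme1n1, its _ctrl_ns registration and the @60-@63 assignments, then the start of a third iteration for nvme2 at 0000:00:12.0.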
00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:45.007 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1n1[ncap]=0x17a17a 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme1n1[nvmcap]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.008 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # 
IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:45.009 
18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:10:45.009 18:01:01 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:45.009 18:01:01 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:10:45.009 18:01:01 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.009 18:01:01 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:45.009 18:01:01 nvme_scc -- 
nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.009 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:10:45.010 18:01:01 nvme_scc 
-- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:10:45.010 18:01:01 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
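The trace above is the nvme_get helper from nvme/functions.sh at work: nvme-cli's id-ctrl output is split on ':' by "IFS=: read -r reg val", each non-empty value passes the "[[ -n ... ]]" guard, and eval stores it into a global associative array declared with "local -gA". A minimal standalone sketch of that same pattern, assuming nvme-cli is installed and /dev/nvme2 exists; this is a simplification for illustration, not the exact functions.sh code:

    # Sketch of the nvme_get parsing loop seen in the trace above.
    declare -A ctrl=()
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip header/blank lines (empty val)
        reg=${reg//[[:space:]]/}           # "lbaf  0 " -> "lbaf0", "vid " -> "vid"
        ctrl[$reg]=${val# }                # drop the single space after the colon
    done < <(nvme id-ctrl /dev/nvme2)
    echo "vid=${ctrl[vid]} oacs=${ctrl[oacs]}"

Note that with two read variables and IFS=:, everything after the first colon lands in val, which is why multi-colon values like "ms:0 lbads:9 rp:0" survive intact, exactly as they appear in the captured lbaf entries.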
00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.010 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:10:45.011 18:01:01 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
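Every register read so far is now addressable as an ordinary associative-array key. A hypothetical inspection helper, not part of functions.sh, that dumps whatever the parser captured; it assumes bash 4.3+ for the nameref:

    # Hypothetical helper for poking at a parsed controller; not in functions.sh.
    dump_ctrl() {
        local -n _c=$1                     # nameref to e.g. the global nvme2 array
        local reg
        for reg in "${!_c[@]}"; do
            printf '%-12s= %s\n' "$reg" "${_c[$reg]}"
        done | sort
    }
    dump_ctrl nvme2                        # prints e.g. "oacs        = 0x12a"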
00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 
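The sqes=0x66 and cqes=0x44 values just captured are packed nibbles: per the NVMe base specification, bits 3:0 give the required queue entry size and bits 7:4 the maximum, each as log2 of the size in bytes, so this controller takes exactly 64-byte SQ entries and 16-byte CQ entries. A quick decode against the array populated above:

    # Decode SQES/CQES: low nibble = required entry size, high nibble =
    # maximum entry size, both stored as log2(bytes) per the NVMe spec.
    sqes=${nvme2[sqes]}                    # 0x66 in this trace
    cqes=${nvme2[cqes]}                    # 0x44 in this trace
    echo "SQ entry: min $((1 << (sqes & 0xf)))B max $((1 << (sqes >> 4)))B"
    echo "CQ entry: min $((1 << (cqes & 0xf)))B max $((1 << (cqes >> 4)))B"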
00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.011 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@22 -- 
# [[ -n 0x3 ]] 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.012 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:10:45.281 
18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.281 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n1 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
0x100000 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 
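For this nvme_scc run, the controller field that actually gates the test is oncs, captured earlier as 0x15d: per the NVMe base specification, ONCS bit 8 advertises the Copy command, and 0x15d has that bit set. An illustrative check; the helper name is invented here, not one functions.sh defines:

    # Illustrative only, helper name is made up. ONCS bit 8 (0x100)
    # advertises the Copy command exercised by the simple-copy (SCC) tests.
    supports_copy() {
        local -n _c=$1
        (( ${_c[oncs]} & 0x100 ))
    }
    supports_copy nvme2 && echo "nvme2 advertises Copy"    # 0x15d & 0x100 != 0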
00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:45.282 18:01:01 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:45.282 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 
ms:8 lbads:9 rp:0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:45.283 18:01:01 nvme_scc -- 
nvme/functions.sh@18 -- # shift 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:45.283 18:01:01 nvme_scc 
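The trace above has just re-entered nvme_get, this time for nvme2n2. Pieced together from the functions.sh line references in the trace (@16-@23), the parser works roughly like the sketch below: run nvme-cli, split each "reg : val" output line on the colon, and eval the pair into a global associative array named after the device. This is a reconstruction from the trace, not verbatim SPDK source; the key/value trimming details in particular are assumptions.

  nvme_get() {
      local ref=$1 reg val                   # @17: ref=nvme2n2
      shift                                  # @18
      local -gA "$ref=()"                    # @20: e.g. declare global nvme2n2=()
      while IFS=: read -r reg val; do        # @21: split "nsze : 0x100000" style lines
          reg=${reg//[[:space:]]/}           # (assumed) strip padding around the key
          val=${val# }                       # (assumed) drop only the separator space
          [[ -n $val ]] || continue          # @22: skip header/blank lines
          eval "${ref}[$reg]=\"$val\""       # @23: nvme2n2[nsze]="0x100000", ...
      done < <(/usr/local/src/nvme-cli/nvme "$@")   # @16: id-ns /dev/nvme2n2
  }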
-- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 
00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:45.283 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 
00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 
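A quick key to the lbafN strings captured above: lbads is log2 of the data block size, ms is the metadata bytes carried per block, rp is the relative performance hint, and the low nibble of flbas selects the format in use. Worked out for the traced values:

  echo $((1 << 9))     # lbads:9  -> 512-byte data blocks  (lbaf0..lbaf3)
  echo $((1 << 12))    # lbads:12 -> 4096-byte data blocks (lbaf4..lbaf7)
  echo $((0x4 & 0xf))  # flbas=0x4 -> format index 4, matching the "(in use)" mark on lbaf4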
18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 
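The @54-@58 entries above show the enclosing loop advancing from nvme2n2 to nvme2n3: a glob over the controller's sysfs directory drives one nvme_get per namespace, and _ctrl_ns indexes each result by namespace id. Each line below mirrors a statement visible in the trace; the loop body is a sketch, not verbatim source.

  for ns in "$ctrl/${ctrl##*/}n"*; do           # @54: /sys/class/nvme/nvme2/nvme2n1, n2, n3
      [[ -e $ns ]] || continue                  # @55
      ns_dev=${ns##*/}                          # @56: e.g. nvme2n3
      nvme_get "$ns_dev" id-ns "/dev/$ns_dev"   # @57: fills nvme2n3[...] as traced
      _ctrl_ns[${ns##*n}]=$ns_dev               # @58: namespace id -> device (3 -> nvme2n3)
  done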
18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:45.284 18:01:01 
nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:45.284 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:45.285 
18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:45.285 18:01:01 nvme_scc -- scripts/common.sh@18 -- # local i 00:10:45.285 18:01:01 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:45.285 18:01:01 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:45.285 18:01:01 nvme_scc -- scripts/common.sh@27 -- # return 0 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@18 -- # shift 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:45.285 18:01:01 nvme_scc -- 
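With nvme2's three namespaces cached, the trace registers the controller in ctrls/nvmes/bdfs/ordered_ctrls (BDF 0000:00:12.0) and the outer loop moves on to nvme3 at 0000:00:13.0, gated by pci_can_use from scripts/common.sh. Roughly, per the @47-@52 references (where the pci value comes from is an assumption here; the trace only shows the assignment):

  for ctrl in /sys/class/nvme/nvme*; do               # @47
      [[ -e $ctrl ]] || continue                      # @48
      pci=$(< "$ctrl/address")                        # @49: (assumed sysfs source) 0000:00:13.0
      pci_can_use "$pci" || continue                  # @50: honors the PCI allow/block lists
      ctrl_dev=${ctrl##*/}                            # @51: nvme3
      nvme_get "$ctrl_dev" id-ctrl "/dev/$ctrl_dev"   # @52: same parser, identify-controller page
  done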
nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.285 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 
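Note that sn, mn and fr land in the array with their identify-page padding intact ('12343 ', 'QEMU NVMe Ctrl ', '8.0.0 '); these are fixed-width, space-padded ASCII fields. A consumer that wants clean strings can trim the trailing run of spaces, e.g. (illustrative only, not something the traced script does):

  shopt -s extglob
  sn=${nvme3[sn]%%+( )}     # '12343 '          -> '12343'
  mn=${nvme3[mn]%%+( )}     # 'QEMU NVMe Ctrl ' -> 'QEMU NVMe Ctrl'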
00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'nvme3[npss]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:10:45.286 18:01:01 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:10:45.286 
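The wctemp/cctemp values just captured (343 and 373) are Kelvin, as the NVMe spec defines these thresholds, so QEMU's emulated controller is advertising a 70 C warning and a 100 C critical composite temperature:

  kelvin_to_celsius() { echo $(($1 - 273)); }    # hypothetical helper, not in functions.sh
  kelvin_to_celsius "${nvme3[wctemp]}"           # 343 -> 70
  kelvin_to_celsius "${nvme3[cctemp]}"           # 373 -> 100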
00:10:45.286 18:01:01 nvme_scc -- [nvme/functions.sh@21-23 id-ctrl parse for nvme3, condensed: each register is read as a reg/val pair with IFS=: and read -r, then stored via eval 'nvme3[reg]="val"']
00:10:45.286 18:01:01 nvme_scc -- nvme3: rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1
00:10:45.287 18:01:01 nvme_scc -- nvme3: anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:10:45.287 18:01:01 nvme_scc -- nvme3: awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:10:45.287 18:01:01 nvme_scc -- nvme3: subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:45.287 18:01:01 nvme_scc -- nvme3: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:10:45.287 18:01:01 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:10:45.287 18:01:01 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:10:45.288 18:01:01 nvme_scc -- [nvme/functions.sh@198-199 condensed: identical ctrl_has_scc checks run for nvme0, nvme3 and nvme2; each reports oncs=0x15d with bit 8 set, so each controller is echoed as SCC-capable]
00:10:45.288 18:01:01 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:10:45.288 18:01:01 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:10:45.288 18:01:01 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:10:45.288 18:01:01 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:10:45.288 18:01:01 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:10:45.288 18:01:01 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:45.855 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:46.422 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:46.422 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:46.422 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:46.422 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
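The id-ctrl trace condensed above is nvme_get from test/common/nvme/functions.sh materializing `nvme id-ctrl` output into a bash associative array, one register per key. A minimal standalone sketch of that pattern (illustrative only, not the exact SPDK helper; it assumes nvme-cli is installed and /dev/nvme0 exists):

    declare -A ctrl

    # nvme id-ctrl prints one "field : value" pair per line, e.g. "oncs : 0x15d".
    # With IFS=: and two variables, read splits only at the first colon, so
    # colon-bearing values like subnqn=nqn.2019-08.org.qemu:12341 stay intact.
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip banners and blank lines
        reg=${reg//[[:space:]]/}           # strip padding around the field name
        ctrl[$reg]=${val# }                # drop the single leading space; trailing
                                           # padding is kept, as in the trace above
    done < <(nvme id-ctrl /dev/nvme0)

    echo "ONCS: ${ctrl[oncs]}"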
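get_ctrls_with_feature settles on nvme1 because the (( oncs & 1 << 8 )) test succeeds: in the NVMe base specification, ONCS bit 8 advertises the Copy command, which is what the Simple Copy test below exercises. A quick shell decode of the 0x15d mask seen in the trace (illustrative):

    oncs=0x15d    # from id-ctrl above; 0x15d = 0b1_0101_1101 -> bits 0,2,3,4,6,8

    for bit in 0 1 2 3 4 5 6 7 8; do
        (( oncs & (1 << bit) )) && echo "ONCS bit $bit set"
    done

    # The check the test framework cares about: bit 8 = Copy command support.
    (( oncs & (1 << 8) )) && echo "controller supports Copy; SCC test can run"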
00:10:46.422 18:01:02 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:46.422 18:01:02 nvme_scc -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:10:46.422 18:01:02 nvme_scc -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:46.422 18:01:02 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:46.422 ************************************
00:10:46.422 START TEST nvme_simple_copy
00:10:46.422 ************************************
00:10:46.422 18:01:02 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:10:46.987 Initializing NVMe Controllers
00:10:46.987 Attaching to 0000:00:10.0
00:10:46.987 Controller supports SCC. Attached to 0000:00:10.0
00:10:46.987 Namespace ID: 1 size: 6GB
00:10:46.987 Initialization complete.
00:10:46.987
00:10:46.987 Controller QEMU NVMe Ctrl (12340 )
00:10:46.987 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:10:46.987 Namespace Block Size:4096
00:10:46.987 Writing LBAs 0 to 63 with Random Data
00:10:46.987 Copied LBAs from 0 - 63 to the Destination LBA 256
00:10:46.987 LBAs matching Written Data: 64
00:10:46.987
00:10:46.987 real 0m0.327s
00:10:46.987 user 0m0.134s
00:10:46.987 sys 0m0.090s
00:10:46.987 18:01:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:46.987 18:01:03 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:10:46.987 ************************************
00:10:46.987 END TEST nvme_simple_copy
00:10:46.987 ************************************
00:10:46.987
00:10:46.987 real 0m8.201s
00:10:46.987 user 0m1.457s
00:10:46.987 sys 0m1.679s
00:10:46.987 18:01:03 nvme_scc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:46.987 18:01:03 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:10:46.987 ************************************
00:10:46.987 END TEST nvme_scc
00:10:46.987 ************************************
00:10:46.988 18:01:03 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:10:46.988 18:01:03 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:10:46.988 18:01:03 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:10:46.988 18:01:03 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:10:46.988 18:01:03 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:10:46.988 18:01:03 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:46.988 18:01:03 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:46.988 18:01:03 -- common/autotest_common.sh@10 -- # set +x
00:10:46.988 ************************************
00:10:46.988 START TEST nvme_fdp
00:10:46.988 ************************************
00:10:46.988 18:01:03 nvme_fdp -- common/autotest_common.sh@1127 -- # test/nvme/nvme_fdp.sh
00:10:46.988 * Looking for test storage...
00:10:46.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:46.988 18:01:03 nvme_fdp -- common/autotest_common.sh@1690 -- # [[ y == y ]]
00:10:46.988 18:01:03 nvme_fdp -- common/autotest_common.sh@1691 -- # lcov --version
00:10:46.988 18:01:03 nvme_fdp -- common/autotest_common.sh@1691 -- # awk '{print $NF}'
00:10:46.988 18:01:03 nvme_fdp -- common/autotest_common.sh@1691 -- # lt 1.15 2
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@365 -- # decimal 1
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@353 -- # local d=1
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:10:46.988 18:01:03 nvme_fdp -- scripts/common.sh@355 -- # echo 1
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@366 -- # decimal 2
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@353 -- # local d=2
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@355 -- # echo 2
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@368 -- # return 0
00:10:47.250 18:01:03 nvme_fdp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:10:47.250 18:01:03 nvme_fdp -- [common/autotest_common.sh@1704-1705 condensed: LCOV_OPTS is exported with the flag set --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1, and LCOV is exported as 'lcov' plus the same flags]
00:10:47.250 18:01:03 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:47.250 18:01:03 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:47.250 18:01:03 nvme_fdp -- [paths/export.sh@2-6 condensed: each of @2-@4 prepends the golangci/protoc/go toolchain directories again, so the PATH exported at @5 and echoed at @6 carries four copies of /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of /usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin]
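The cmp_versions trace above walks two dotted version strings field by field (lcov 1.15 vs 2) and declares the shorter-missing fields zero. A stripped-down sketch of the same comparison, splitting only on dots rather than the script's full IFS=.-: set (illustrative, not the exact SPDK implementation):

    # Return success if $1 < $2, comparing dotted version strings numerically.
    version_lt() {
        local -a a b
        IFS=. read -ra a <<< "$1"
        IFS=. read -ra b <<< "$2"
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # missing fields count as 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal is not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 is older than 2"   # prints, as in the trace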
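Also visible above: paths/export.sh prepends the go/protoc/golangci directories unconditionally each time it is sourced, which is why the echoed PATH carries four copies of each prefix. That is harmless for lookup, but a guard along these lines (a sketch, not part of the SPDK scripts) would keep PATH deduplicated:

    # Prepend a directory only if it is not already present in PATH.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;                 # already present, leave PATH alone
            *) PATH="$1:$PATH" ;;
        esac
    }

    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/go/1.21.1/bin      # second call is a no-op
    export PATH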
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=()
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=()
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=()
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=()
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls
00:10:47.250 18:01:03 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name=
00:10:47.250 18:01:03 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:47.250 18:01:03 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset
00:10:47.509 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:47.767 Waiting for block devices as requested
00:10:47.767 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme
00:10:47.767 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme
00:10:47.767 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme
00:10:48.025 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme
00:10:53.312 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing
00:10:53.312 18:01:09 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]]
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0
00:10:53.312 18:01:09 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:53.312 18:01:09 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]]
00:10:53.312 18:01:09 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:53.312 18:01:09 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()'
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]]
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]]
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"'
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=:
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val
00:10:53.312 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]]
00:10:53.312 18:01:09 nvme_fdp -- [nvme/functions.sh@21-23 id-ctrl parse for nvme0 condensed, same reg/val pattern as the iteration above]
00:10:53.312 18:01:09 nvme_fdp -- nvme0: ssvid=0x1af4 sn='12341   ' mn='QEMU NVMe Ctrl   ' fr='8.0.0   ' rab=6 ieee=525400 cmic=0 mdts=7 cntlid=0 ver=0x10400
00:10:53.313 18:01:09 nvme_fdp -- nvme0: rtd3r=0 rtd3e=0 oaes=0x100 ctratt=0x8000 rrls=0 cntrltype=1 fguid=00000000-0000-0000-0000-000000000000 crdt1=0 crdt2=0 crdt3=0 nvmsr=0 vwci=0 mec=0
00:10:53.313 18:01:09 nvme_fdp -- nvme0: oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0 wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0
00:10:53.314 18:01:09 nvme_fdp -- nvme0: rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0 hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=0 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0 pels=0 domainid=0 megcap=0
00:10:53.314 18:01:09 nvme_fdp -- nvme0: sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7 awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:10:53.315 18:01:09 nvme_fdp -- nvme0: subnqn=nqn.2019-08.org.qemu:12341 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0 ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:10:53.315 18:01:09 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:10:53.316 18:01:09 nvme_fdp -- 
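
What the trace above is doing, field by field, is nvme/functions.sh's nvme_get helper: it runs nvme-cli (id-ctrl for the controller, and from here on id-ns for each namespace), splits every "field : value" line on IFS=':', and evals the pair into a global associative array named after the device. A minimal re-sketch of that pattern, simplified from the traced logic rather than copied from the real helper:

    # Sketch of the traced nvme_get pattern: parse "field : value" lines
    # from nvme-cli into a global associative array named by $1.
    nvme_get() {
        local ref=$1 reg val
        shift
        local -gA "$ref=()"                       # e.g. local -gA 'nvme0n1=()'
        while IFS=: read -r reg val; do
            [[ -n $val ]] || continue             # skip blank/header lines
            reg=${reg//[[:space:]]/}              # "ps    0 " -> "ps0"
            eval "${ref}[${reg}]=\"${val# }\""    # e.g. nvme0[vwc]="0x7"
        done < <(/usr/local/src/nvme-cli/nvme "$@")
    }
    # as invoked in the trace: nvme_get nvme0n1 id-ns /dev/nvme0n1

Note the values keep their raw text, trailing padding included, which is why entries like sn and mn show up quoted with spaces later in the log.
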
nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.316 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme0n1[npwa]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:10:53.317 18:01:09 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:12 rp:0 (in use) ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.317 18:01:09 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:10:53.318 18:01:09 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:53.318 18:01:09 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:10:53.318 18:01:09 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:53.318 18:01:09 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # 
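
The nvme0n1 id-ns values just parsed decode neatly: flbas=0x4 selects LBA format 4, and lbaf4 reads ms:0 lbads:12 rp:0 (in use), i.e. no separate metadata and 2^12 = 4096-byte data blocks, matching the "(in use)" marker nvme-cli printed. With nsze=0x140000 blocks that makes the namespace exactly 5 GiB. A small follow-on using the arrays filled above (array names as in the trace; the lbads string slicing is illustrative):

    # Decode the in-use LBA format and namespace size from the parsed arrays.
    flbas_index=$(( ${nvme0n1[flbas]} & 0xf ))       # low nibble -> 4
    lbaf=${nvme0n1[lbaf$flbas_index]}                # "ms:0 lbads:12 rp:0 (in use)"
    lbads=${lbaf#*lbads:} lbads=${lbads%% *}         # -> 12
    block_size=$(( 1 << lbads ))                     # -> 4096
    size_bytes=$(( ${nvme0n1[nsze]} * block_size ))  # 0x140000 * 4096 = 5368709120
    echo "$(( size_bytes >> 30 )) GiB"               # -> 5
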
IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- 
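
Between the two controllers the trace dropped into scripts/common.sh's pci_can_use (the @18/@21/@25/@27 lines a little above): with both filter lists empty, 0000:00:10.0 was accepted and is now being parsed as nvme1. A hedged sketch of that gate; the PCI_ALLOWED/PCI_BLOCKED names are assumptions inferred from the empty [[ ... ]] operands in the trace, not read out of this log:

    # Assumed shape of the traced pci_can_use check: a device is usable unless
    # block-listed, and, when an allow list exists, only if listed there.
    pci_can_use() {
        local i pci=$1
        for i in $PCI_BLOCKED; do            # hypothetical variable name
            [[ $i == "$pci" ]] && return 1
        done
        [[ -z $PCI_ALLOWED ]] && return 0    # empty allow list: everything passes
        for i in $PCI_ALLOWED; do
            [[ $i == "$pci" ]] && return 0
        done
        return 1
    }
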
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.318 
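
Two of the nvme1 fields above decode directly. ver=0x10400 packs the NVMe spec version as major<<16 | minor<<8 | tertiary, i.e. 1.4.0, and mdts=7 caps a single transfer at 2^7 pages; assuming the usual 4 KiB minimum page size (CAP.MPSMIN is not shown in this excerpt), that is 512 KiB:

    ver=0x10400
    printf '%d.%d.%d\n' $(( ver >> 16 )) $(( (ver >> 8) & 0xff )) $(( ver & 0xff ))  # 1.4.0
    mdts=7 page=4096                  # page size assumed, not in the log
    echo $(( (1 << mdts) * page ))    # 524288 bytes = 512 KiB max transfer
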
18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.318 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- 
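
oacs=0x12a is the optional-admin-command bitmask; unpacked per the NVMe identify-controller layout it advertises Format NVM (bit 1), Namespace Management (bit 3), Directives (bit 5), and Doorbell Buffer Config (bit 8). The Directives bit is the one this nvme_fdp test ultimately cares about, since FDP placement handles ride on the directives mechanism:

    oacs=0x12a
    (( oacs & 1 << 1 )) && echo 'Format NVM'
    (( oacs & 1 << 3 )) && echo 'Namespace Management'
    (( oacs & 1 << 5 )) && echo 'Directives'              # used by FDP placement
    (( oacs & 1 << 8 )) && echo 'Doorbell Buffer Config'
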
# IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.319 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 
18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:10:53.320 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:10:53.321 18:01:09 nvme_fdp -- 
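
sqes=0x66 and cqes=0x44 above encode queue entry sizes as two power-of-two nibbles, low nibble required and high nibble maximum: 64-byte submission entries and 16-byte completion entries, the standard sizes for a PCIe NVMe controller:

    sqes=0x66 cqes=0x44
    echo "SQE: $(( 1 << (sqes & 0xf) ))..$(( 1 << (sqes >> 4) )) bytes"   # 64..64
    echo "CQE: $(( 1 << (cqes & 0xf) ))..$(( 1 << (cqes >> 4) )) bytes"   # 16..16
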
nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 
nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.321 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # 
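
The identity pieces line up for this second controller: sn='12340 ', subnqn=nqn.2019-08.org.qemu:12340, and the 0000:00:10.0 bdf selected earlier. QEMU derives the default subsystem NQN from the controller serial, so a device definition along these lines would produce what the trace parses (illustrative flags only; the actual VM is assembled by prepare_nvme.sh, whose arguments are not part of this excerpt):

    # Illustrative QEMU fragment for a controller like nvme1 above; the
    # drive file name is a placeholder, not taken from this log.
    qemu-system-x86_64 \
        -drive id=d0,file=disk0.img,format=raw,if=none \
        -device nvme,drive=d0,serial=12340   # -> sn '12340 ', subnqn ...:12340
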
nvme1[ofcs]=0 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 
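
The namespace walk starting above is plain sysfs globbing: for each /sys/class/nvme/nvmeX controller directory, child entries matching nvmeXn* are its namespaces, and each gets its own nvme_get id-ns pass. The loop shape, reassembled from the traced @54..@57 lines and using the nvme_get sketch given earlier:

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        for ns in "$ctrl/${ctrl##*/}n"*; do    # e.g. /sys/class/nvme/nvme1/nvme1n1
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                   # -> nvme1n1
            nvme_get "$ns_dev" id-ns "/dev/$ns_dev"
        done
    done
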
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()'
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f
00:10:53.322 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dps]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0
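Namespace discovery, visible at functions.sh@54-57 above, is a plain glob over the controller's sysfs directory. A standalone sketch of the same walk (paths as in the trace; the echo stands in for the real nvme_get call):

    for ctrl in /sys/class/nvme/nvme*; do
        for ns in "$ctrl/${ctrl##*/}n"*; do   # e.g. /sys/class/nvme/nvme1/nvme1n1
            [[ -e $ns ]] || continue
            ns_dev=${ns##*/}                  # nvme1n1
            echo "would run: nvme id-ns /dev/$ns_dev"
        done
    done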
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[npda]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nows]=0
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128
00:10:53.323 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 '
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 '
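Each lbafN value recorded here is a triple -- ms (metadata bytes per block), lbads (log2 of the data block size), rp (relative performance) -- so lbaf0 'ms:0 lbads:9' is a plain 512-byte format. The conversion from lbads to bytes is a shift:

    for lbads in 9 12; do
        echo "lbads:$lbads -> $((1 << lbads)) bytes per block"
    done
    # lbads:9  -> 512 bytes per block
    # lbads:12 -> 4096 bytes per block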
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 '
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 '
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 '
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)'
00:10:53.324 18:01:09 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme*
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]]
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:12.0
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0
00:10:53.325 18:01:09 nvme_fdp -- scripts/common.sh@18 -- # local i
00:10:53.325 18:01:09 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]]
00:10:53.325 18:01:09 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]]
00:10:53.325 18:01:09 nvme_fdp -- scripts/common.sh@27 -- # return 0
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme2
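With nvme1 fully mapped, functions.sh@58-63 files it into the global bookkeeping arrays, and @47-51 moves on to the next controller after checking its PCI address against an allow-list. A condensed, self-contained sketch of that bookkeeping (array names match the trace; the pci_can_use body is reduced to the happy path seen here):

    declare -A ctrls=() nvmes=() bdfs=()
    declare -a ordered_ctrls=()

    for ctrl in /sys/class/nvme/nvme*; do
        [[ -e $ctrl ]] || continue
        ctrl_dev=${ctrl##*/}                               # e.g. nvme2
        pci=$(basename "$(readlink -f "$ctrl/device")")    # e.g. 0000:00:12.0
        ctrls["$ctrl_dev"]=$ctrl_dev
        nvmes["$ctrl_dev"]=${ctrl_dev}_ns                  # name of its namespace map
        bdfs["$ctrl_dev"]=$pci                             # controller -> PCI BDF
        ordered_ctrls[${ctrl_dev/nvme/}]=$ctrl_dev         # index by controller number
    done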
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2 reg val
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2=()'
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 '
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl '
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 '
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400
00:10:53.325 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1
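mdts=7 above caps the largest single data transfer: the limit is 2^MDTS minimum-size pages. Assuming the usual 4 KiB CAP.MPSMIN (the CAP register is not shown in this trace):

    mdts=7 mpsmin_bytes=4096
    echo $(( (1 << mdts) * mpsmin_bytes ))   # 524288 bytes = 512 KiB per command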
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a
00:10:53.326 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0
00:10:53.327 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0
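wctemp/cctemp are reported in Kelvin per the NVMe spec, so the thresholds recorded above convert to familiar numbers:

    echo $((343 - 273))   # 70  degC warning threshold (wctemp)
    echo $((373 - 273))   # 100 degC critical threshold (cctemp)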
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sanicap]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmminds]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nsetidmax]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anatt]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anacap]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[pels]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[domainid]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[megcap]=0
00:10:53.329 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nn]=256
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fna]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awun]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[awupf]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nwpc]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acwu]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3
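oncs=0x15d is the Optional NVM Command Support bitmask, and tests probe it bit by bit once the array is populated. One example bit (bit 2, Dataset Management, which the deallocate paths rely on; see the NVMe base spec for the full bit layout, not asserted here):

    oncs=0x15d
    if (( oncs & (1 << 2) )); then
        echo "controller supports Dataset Management"
    fi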
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxdna]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[maxcna]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[icdoff]=0
00:10:53.330 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fcatt]=0
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[msdbd]=0
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-'
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=-
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"*
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()'
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000
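The ps0/rwt values captured just above are free-form field lists rather than single registers; pulling one field back out is just word-splitting. A throwaway sketch, not functions.sh code:

    ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0'
    for field in $ps0; do
        [[ $field == mp:* ]] && echo "max power: ${field#mp:}"   # 25.00W
    done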
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0
00:10:53.331 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0
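From the two values just recorded the namespace size falls out directly: nsze=0x100000 blocks, and flbas=0x4 selects format 4, which (assuming the same lbaf table nvme1n1 reported above, i.e. lbaf4 = ms:0 lbads:12) means 4096-byte blocks:

    echo $(( 0x100000 * (1 << 12) ))   # 4294967296 bytes = 4 GiB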
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0
00:10:53.332 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0
00:10:53.333 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0
00:10:53.333 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0
00:10:53.333 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0
00:10:53.333 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0
00:10:53.333 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0
00:10:53.333 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0
00:10:53.333 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128
00:10:53.333 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 '
00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 '
' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/${ctrl##*/}n"* 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- 
nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:10:53.334 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:10:53.335 18:01:09 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npda]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ 
-n 128 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 
lbads:9 rp:0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@54 -- # for 
ns in "$ctrl/${ctrl##*/}n"* 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.335 18:01:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[mc]=0x3 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.597 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
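The flbas/lbafN values being recorded here identify the LBA format each namespace actually uses: the low nibble of FLBAS (0x4 in this run) indexes the lbafN descriptors, and lbads in the selected descriptor is a power-of-two data size, so lbads:12 means 4096-byte blocks, which is why lbaf4 is the descriptor tagged "(in use)". A small decoding sketch under the values from this trace; get_blocksize is an illustrative name, and with nlbaf=7 the low nibble of FLBAS is sufficient:

    #!/usr/bin/env bash
    # Decode the in-use LBA format from FLBAS plus its lbafN descriptor.
    get_blocksize() {
        local flbas=$1 desc=$2
        local fmt=$(( flbas & 0xf ))              # in-use format index (here: 4)
        [[ $desc =~ lbads:([0-9]+) ]] || return 1
        echo "lbaf$fmt -> $(( 1 << BASH_REMATCH[1] )) byte blocks"
    }

    get_blocksize 0x4 'ms:0 lbads:12 rp:0 (in use)'   # prints: lbaf4 -> 4096 byte blocks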
00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:10:53.598 
18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # 
nvme2n3[nguid]=00000000000000000000000000000000 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:10:53.598 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:10:53.599 18:01:09 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:10:53.599 18:01:09 nvme_fdp -- scripts/common.sh@18 -- # local i 00:10:53.599 18:01:09 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:10:53.599 18:01:09 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:10:53.599 18:01:09 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:10:53.599 18:01:09 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:10:53.599 
00:10:53.599 18:01:09 nvme_fdp -- nvme/functions.sh@21-23 -- # read/eval loop over identify-controller registers for nvme3 (per-register IFS/read/eval xtrace condensed, values preserved):
00:10:53.599 18:01:09 nvme_fdp -- #   nvme3: nvmsr=0 vwci=0 mec=0 oacs=0x12a acl=3 aerl=3 frmw=0x3 lpa=0x7 elpe=0 npss=0 avscc=0 apsta=0
00:10:53.600 18:01:09 nvme_fdp -- #   nvme3: wctemp=343 cctemp=373 mtfa=0 hmpre=0 hmmin=0 tnvmcap=0 unvmcap=0 rpmbs=0 edstt=0 dsto=0 fwug=0 kas=0
00:10:53.600 18:01:09 nvme_fdp -- #   nvme3: hctma=0 mntmt=0 mxtmt=0 sanicap=0 hmminds=0 hmmaxd=0 nsetidmax=0 endgidmax=1 anatt=0 anacap=0 anagrpmax=0 nanagrpid=0
00:10:53.601 18:01:09 nvme_fdp -- #   nvme3: pels=0 domainid=0 megcap=0 sqes=0x66 cqes=0x44 maxcmd=0 nn=256 oncs=0x15d fuses=0 fna=0 vwc=0x7
00:10:53.601 18:01:09 nvme_fdp -- #   nvme3: awun=0 awupf=0 icsvscc=0 nwpc=0 acwu=0 ocfs=0x3 sgls=0x1 mnan=0 maxdna=0 maxcna=0
00:10:53.601 18:01:09 nvme_fdp -- #   nvme3: subnqn=nqn.2019-08.org.qemu:fdp-subsys3 ioccsz=0 iorcsz=0 icdoff=0 fcatt=0 msdbd=0 ofcs=0
00:10:53.601 18:01:09 nvme_fdp -- #   nvme3: ps0='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' rwt='0 rwl:0 idle_power:- active_power:-' active_power_workload=-
00:10:53.601 18:01:09 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns
00:10:53.601 18:01:09 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3
00:10:53.601 18:01:09 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns
00:10:53.601 18:01:09 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0
00:10:53.601 18:01:09 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:10:53.602 18:01:09 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp -> function
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@198-199 -- # ctrl_has_fdp nvme1: ctratt=0x8000, (( ctratt & 1 << 19 )) fails -> no FDP
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@198-199 -- # ctrl_has_fdp nvme0: ctratt=0x8000, (( ctratt & 1 << 19 )) fails -> no FDP
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@198-199 -- # ctrl_has_fdp nvme3: ctratt=0x88010, (( ctratt & 1 << 19 )) passes -> echo nvme3
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@198-199 -- # ctrl_has_fdp nvme2: ctratt=0x8000, (( ctratt & 1 << 19 )) fails -> no FDP
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@207 -- # (( 1 > 0 ))
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3
00:10:53.602 18:01:09 nvme_fdp -- nvme/functions.sh@209 -- # return 0
00:10:53.602 18:01:09 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3
00:10:53.602 18:01:09 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0
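Note: the scan above is the whole controller-selection logic. A controller advertises Flexible Data Placement support via bit 19 of the Identify Controller CTRATT field, which is why 0x88010 is selected and 0x8000 is not. A minimal, self-contained sketch of that test (register values hard-coded here for illustration; the real helper reads them from the parsed identify data):

    #!/usr/bin/env bash
    # FDP support is advertised in Identify Controller CTRATT, bit 19 (0x80000).
    ctrl_has_fdp() {
      local ctratt=$1
      (( ctratt & (1 << 19) ))   # exit status 0 (true) when the FDP bit is set
    }

    for entry in nvme0:0x8000 nvme1:0x8000 nvme2:0x8000 nvme3:0x88010; do
      name=${entry%%:*} ctratt=${entry##*:}
      ctrl_has_fdp "$ctratt" && echo "$name supports FDP"
    done
    # prints: nvme3 supports FDP   (0x88010 = 0x80000 | 0x8000 | 0x10)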
00:10:53.602 18:01:09 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:10:54.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:10:54.750 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:10:54.750 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:10:54.750 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:10:54.750 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:10:54.750 18:01:11 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:54.750 18:01:11 nvme_fdp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']'
00:10:54.750 18:01:11 nvme_fdp -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:54.750 18:01:11 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
00:10:54.750 ************************************
00:10:54.750 START TEST nvme_flexible_data_placement
00:10:54.750 ************************************
00:10:54.750 18:01:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0'
00:10:55.015 Initializing NVMe Controllers
00:10:55.015 Attaching to 0000:00:13.0
00:10:55.015 Controller supports FDP
00:10:55.015 Attached to 0000:00:13.0
00:10:55.015 Namespace ID: 1 Endurance Group ID: 1
00:10:55.015 Initialization complete.
00:10:55.015
00:10:55.015 ==================================
00:10:55.015 == FDP tests for Namespace: #01 ==
00:10:55.015 ==================================
00:10:55.015
00:10:55.015 Get Feature: FDP:
00:10:55.015 =================
00:10:55.015 Enabled: Yes
00:10:55.015 FDP configuration Index: 0
00:10:55.015
00:10:55.015 FDP configurations log page
00:10:55.015 ===========================
00:10:55.015 Number of FDP configurations: 1
00:10:55.015 Version: 0
00:10:55.015 Size: 112
00:10:55.015 FDP Configuration Descriptor: 0
00:10:55.015 Descriptor Size: 96
00:10:55.015 Reclaim Group Identifier format: 2
00:10:55.015 FDP Volatile Write Cache: Not Present
00:10:55.015 FDP Configuration: Valid
00:10:55.015 Vendor Specific Size: 0
00:10:55.015 Number of Reclaim Groups: 2
00:10:55.015 Number of Reclaim Unit Handles: 8
00:10:55.015 Max Placement Identifiers: 128
00:10:55.015 Number of Namespaces Supported: 256
00:10:55.015 Reclaim Unit Nominal Size: 6000000 bytes
00:10:55.015 Estimated Reclaim Unit Time Limit: Not Reported
00:10:55.015 RUH Desc #000: RUH Type: Initially Isolated
00:10:55.015 RUH Desc #001: RUH Type: Initially Isolated
00:10:55.015 RUH Desc #002: RUH Type: Initially Isolated
00:10:55.015 RUH Desc #003: RUH Type: Initially Isolated
00:10:55.015 RUH Desc #004: RUH Type: Initially Isolated
00:10:55.015 RUH Desc #005: RUH Type: Initially Isolated
00:10:55.015 RUH Desc #006: RUH Type: Initially Isolated
00:10:55.015 RUH Desc #007: RUH Type: Initially Isolated
00:10:55.015
00:10:55.015 FDP reclaim unit handle usage log page
00:10:55.015 ======================================
00:10:55.015 Number of Reclaim Unit Handles: 8
00:10:55.015 RUH Usage Desc #000: RUH Attributes: Controller Specified
00:10:55.015 RUH Usage Desc #001: RUH Attributes: Unused
00:10:55.015 RUH Usage Desc #002: RUH Attributes: Unused
00:10:55.015 RUH Usage Desc #003: RUH Attributes: Unused
00:10:55.015 RUH Usage Desc #004: RUH Attributes: Unused
00:10:55.015 RUH Usage Desc #005: RUH Attributes: Unused
00:10:55.015 RUH Usage Desc #006: RUH Attributes: Unused
00:10:55.015 RUH Usage Desc #007: RUH Attributes: Unused
00:10:55.015
00:10:55.015 FDP statistics log page
00:10:55.015 =======================
00:10:55.015 Host bytes with metadata written: 848367616
00:10:55.015 Media bytes with metadata written: 848613376
00:10:55.015 Media bytes erased: 0
00:10:55.015
00:10:55.015 FDP Reclaim unit handle status
00:10:55.015 ==============================
00:10:55.015 Number of RUHS descriptors: 2
00:10:55.015 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x00000000000036ef
00:10:55.015 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000
00:10:55.015
00:10:55.015 FDP write on placement id: 0 success
00:10:55.015
00:10:55.015 Set Feature: Enabling FDP events on Placement handle: #0 Success
00:10:55.015
00:10:55.015 IO mgmt send: RUH update for Placement ID: #0 Success
00:10:55.015
00:10:55.015 Get Feature: FDP Events for Placement handle: #0
00:10:55.015 ========================
00:10:55.015 Number of FDP Events: 6
00:10:55.015 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes
00:10:55.015 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes
00:10:55.015 FDP Event: #2 Type: Ctrlr Reset Modified RUHs Enabled: Yes
00:10:55.015 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes
00:10:55.015 FDP Event: #4 Type: Media Reallocated Enabled: No
00:10:55.015 FDP Event: #5 Type: Implicitly modified RUH Enabled: No
00:10:55.015
00:10:55.015 FDP events log page
00:10:55.015 ===================
00:10:55.015 Number of FDP events: 1
00:10:55.015 FDP Event #0:
00:10:55.015 Event Type: RU Not Written to Capacity
00:10:55.015 Placement Identifier: Valid
00:10:55.016 NSID: Valid
00:10:55.016 Location: Valid
00:10:55.016 Placement Identifier: 0
00:10:55.016 Event Timestamp: 9
00:10:55.016 Namespace Identifier: 1
00:10:55.016 Reclaim Group Identifier: 0
00:10:55.016 Reclaim Unit Handle Identifier: 0
00:10:55.016
00:10:55.016 FDP test passed
00:10:55.016
00:10:55.016 real 0m0.295s
00:10:55.016 user 0m0.112s
00:10:55.016 sys 0m0.082s
00:10:55.016 18:01:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:55.016 18:01:11 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x
00:10:55.016 ************************************
00:10:55.016 END TEST nvme_flexible_data_placement
00:10:55.016 ************************************
00:10:55.016 ************************************
00:10:55.016 END TEST nvme_fdp
00:10:55.016 ************************************
00:10:55.016
00:10:55.016 real 0m8.217s
00:10:55.016 user 0m1.489s
00:10:55.016 sys 0m1.683s
00:10:55.016 18:01:11 nvme_fdp -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:55.016 18:01:11 nvme_fdp -- common/autotest_common.sh@10 -- # set +x
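Note: the START TEST / END TEST banners and the real/user/sys lines above all come from the run_test wrapper in common/autotest_common.sh. A simplified sketch of that pattern (the actual helper also manages xtrace state and failure bookkeeping, which is elided; banner width assumed):

    # Simplified run_test: banner, timed execution, closing banner.
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                # the bash 'time' keyword prints real/user/sys
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return "$rc"
    }

    run_test demo_sleep sleep 0.1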
00:10:55.276 18:01:11 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]]
00:10:55.276 18:01:11 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:55.276 18:01:11 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:55.276 18:01:11 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:55.276 18:01:11 -- common/autotest_common.sh@10 -- # set +x
00:10:55.276 ************************************
00:10:55.276 START TEST nvme_rpc
00:10:55.276 ************************************
00:10:55.276 18:01:11 nvme_rpc -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
00:10:55.276 * Looking for test storage...
00:10:55.276 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:55.276 18:01:11 nvme_rpc -- common/autotest_common.sh@1690-1691 -- # lcov --version | awk '{print $NF}' -> 1.15
00:10:55.276 18:01:11 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 -> true, fields compared numerically (per-field xtrace condensed)
00:10:55.534 18:01:11 nvme_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
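Note: the condensed version gate above splits each version string on dots and dashes and compares the fields numerically, left to right, padding the shorter version with zeros. A standalone sketch of that comparison (simplified from the cmp_versions/decimal helpers in scripts/common.sh; the function name here is illustrative):

    # version_lt A B: succeeds when version A sorts strictly before B.
    version_lt() {
      local -a v1 v2
      IFS=.- read -ra v1 <<< "$1"
      IFS=.- read -ra v2 <<< "$2"
      local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
      for (( i = 0; i < max; i++ )); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
      done
      return 1   # versions are equal
    }

    version_lt 1.15 2 && echo "lcov < 2: use the legacy --rc lcov_* option names"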
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@1704-1705 -- # export LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1' LCOV="lcov $LCOV_OPTS" (repeated export trace condensed)
00:10:55.535 18:01:11 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:55.535 18:01:11 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@1498 -- # (( 4 == 0 ))
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@1510 -- # echo 0000:00:10.0
00:10:55.535 18:01:11 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0
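Note: the target BDF is picked by enumerating NVMe devices and taking the first address; the trace above shows the exact pipeline. Rewrapped as a sketch (assumes $rootdir points at the SPDK checkout, as it does in this run):

    get_first_nvme_bdf() {
      local -a bdfs
      # gen_nvme.sh emits a bdev config whose traddr params are the PCI BDFs
      bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
      (( ${#bdfs[@]} > 0 )) || return 1
      echo "${bdfs[0]}"
    }
    # In this run: 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 -> 0000:00:10.0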
00:10:55.535 18:01:11 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67131
00:10:55.535 18:01:11 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:10:55.535 18:01:11 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT
00:10:55.535 18:01:11 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67131
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@833 -- # '[' -z 67131 ']'
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:55.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:55.535 18:01:11 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:55.535 [2024-10-28 18:01:11.948487] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization...
00:10:55.535 [2024-10-28 18:01:11.948682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67131 ]
00:10:55.794 [2024-10-28 18:01:12.140182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:10:55.794 [2024-10-28 18:01:12.267296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:55.794 [2024-10-28 18:01:12.267296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:56.730 18:01:13 nvme_rpc -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:10:56.730 18:01:13 nvme_rpc -- common/autotest_common.sh@866 -- # return 0
00:10:56.730 18:01:13 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
00:10:56.988 Nvme0n1
00:10:56.989 18:01:13 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']'
00:10:56.989 18:01:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1
00:10:57.246 request:
00:10:57.246 {
00:10:57.246 "bdev_name": "Nvme0n1",
00:10:57.246 "filename": "non_existing_file",
00:10:57.246 "method": "bdev_nvme_apply_firmware",
00:10:57.246 "req_id": 1
00:10:57.246 }
00:10:57.246 Got JSON-RPC error response
00:10:57.246 response:
00:10:57.246 {
00:10:57.246 "code": -32603,
00:10:57.246 "message": "open file failed."
00:10:57.246 }
00:10:57.246 18:01:13 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1
00:10:57.246 18:01:13 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']'
00:10:57.246 18:01:13 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0
00:10:57.813 18:01:14 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:10:57.813 18:01:14 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67131
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@952 -- # '[' -z 67131 ']'
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@956 -- # kill -0 67131
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@957 -- # uname
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67131
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:10:57.813 killing process with pid 67131
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67131'
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@971 -- # kill 67131
00:10:57.813 18:01:14 nvme_rpc -- common/autotest_common.sh@976 -- # wait 67131
00:10:59.712
00:10:59.712 real 0m4.489s
00:10:59.712 user 0m8.787s
00:10:59.712 sys 0m0.620s
00:10:59.712 18:01:16 nvme_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable
00:10:59.712 ************************************
00:10:59.712 END TEST nvme_rpc
00:10:59.712 ************************************
00:10:59.712 18:01:16 nvme_rpc -- common/autotest_common.sh@10 -- # set +x
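Note: the apply_firmware call in the nvme_rpc test above is a negative test: the RPC is expected to fail with -32603 because the firmware file does not exist, and the script only records the failure status. A sketch of that expected-failure idiom (assertion logic simplified from what the trace shows):

    rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    if "$rpc_py" bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
      rv=''        # unexpected success leaves rv empty
    else
      rv=$?        # expected path: JSON-RPC error -32603 "open file failed."
    fi
    [ -z "$rv" ] && { echo 'apply_firmware unexpectedly succeeded' >&2; exit 1; }
    echo "apply_firmware failed as expected (rv=$rv)"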
00:10:59.712 18:01:16 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:10:59.712 18:01:16 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']'
00:10:59.712 18:01:16 -- common/autotest_common.sh@1109 -- # xtrace_disable
00:10:59.712 18:01:16 -- common/autotest_common.sh@10 -- # set +x
00:10:59.712 ************************************
00:10:59.712 START TEST nvme_rpc_timeouts
00:10:59.712 ************************************
00:10:59.712 18:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh
00:10:59.712 * Looking for test storage...
00:10:59.712 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:10:59.971 18:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@1690-1705 -- # lcov version gate and LCOV_OPTS/LCOV exports (same check as in nvme_rpc above; duplicate xtrace condensed)
00:10:59.971 18:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
00:10:59.971 18:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67212
00:10:59.971 18:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67212
00:10:59.971 18:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67245
00:10:59.971 18:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
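Note: the trap installed above is the test's cleanup contract: if the script is interrupted or exits early, the SPDK target is killed and the scratch files removed. The shape of that idiom, using the names from this run:

    spdk_tgt_pid=67245
    tmpfile_default_settings=/tmp/settings_default_67212
    tmpfile_modified_settings=/tmp/settings_modified_67212
    # Single quotes defer expansion until the trap actually fires.
    trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT
    # ... test body runs here ...
    trap - SIGINT SIGTERM EXIT   # cleared on success (see @52 below) so a normal EXIT skips the kill -9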
00:10:59.971 18:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3
00:10:59.971 18:01:16 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67245
00:10:59.971 18:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # '[' -z 67245 ']'
00:10:59.971 18:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:59.971 18:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:59.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:59.971 18:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:59.971 18:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:59.971 18:01:16 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x
00:10:59.971 [2024-10-28 18:01:16.352265] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization...
00:10:59.971 [2024-10-28 18:01:16.352454] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67245 ]
00:11:00.229 [2024-10-28 18:01:16.530780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:11:00.488 [2024-10-28 18:01:16.723455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:00.488 [2024-10-28 18:01:16.723466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:01.054 18:01:17 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:11:01.054 18:01:17 nvme_rpc_timeouts -- common/autotest_common.sh@866 -- # return 0
00:11:01.054 Checking default timeout settings:
00:11:01.054 18:01:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings:
00:11:01.054 18:01:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:11:01.783 Making settings changes with rpc:
00:11:01.783 18:01:17 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc:
00:11:01.783 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort
00:11:01.783 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings:
00:11:01.783 Check default vs. modified settings:
00:11:02.349 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config
00:11:02.349 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us'
00:11:02.349 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39-42 -- # action_on_timeout: grep/awk/sed on /tmp/settings_default_67212 -> setting_before=none; on /tmp/settings_modified_67212 -> setting_modified=abort; '[' none == abort ']' fails, so the setting changed
00:11:02.349 Setting action_on_timeout is changed as expected.
00:11:02.349 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected.
00:11:02.349 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39-42 -- # timeout_us: setting_before=0, setting_modified=12000000; '[' 0 == 12000000 ']' fails
00:11:02.350 Setting timeout_us is changed as expected.
00:11:02.350 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected.
00:11:02.350 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39-42 -- # timeout_admin_us: setting_before=0, setting_modified=24000000; '[' 0 == 24000000 ']' fails
00:11:02.350 Setting timeout_admin_us is changed as expected.
00:11:02.350 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected.
00:11:02.350 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT
00:11:02.350 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67212 /tmp/settings_modified_67212
00:11:02.350 18:01:18 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67245
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # '[' -z 67245 ']'
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # kill -0 67245
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # uname
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 67245
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:11:02.350 killing process with pid 67245
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@970 -- # echo 'killing process with pid 67245'
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@971 -- # kill 67245
00:11:02.350 18:01:18 nvme_rpc_timeouts -- common/autotest_common.sh@976 -- # wait 67245
00:11:04.879 RPC TIMEOUT SETTING TEST PASSED.
00:11:04.879 18:01:20 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED.
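Note: the three comparisons above all follow the same recipe: save the config before and after the rpc call, pull each setting out of both JSON dumps, and require that the value changed. A compact sketch of that loop (grep/awk/sed stages exactly as traced; error handling simplified):

    for setting in action_on_timeout timeout_us timeout_admin_us; do
      before=$(grep "$setting" /tmp/settings_default_67212 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_67212 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      if [ "$before" == "$after" ]; then
        echo "ERROR: setting $setting was not modified" >&2
        exit 1
      fi
      echo "Setting $setting is changed as expected."
    done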
00:11:04.879 00:11:04.879 real 0m4.716s 00:11:04.879 user 0m9.310s 00:11:04.879 sys 0m0.608s 00:11:04.879 ************************************ 00:11:04.879 END TEST nvme_rpc_timeouts 00:11:04.879 ************************************ 00:11:04.879 18:01:20 nvme_rpc_timeouts -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:04.879 18:01:20 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:11:04.879 18:01:20 -- spdk/autotest.sh@239 -- # uname -s 00:11:04.879 18:01:20 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:11:04.879 18:01:20 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:04.879 18:01:20 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:11:04.879 18:01:20 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:04.879 18:01:20 -- common/autotest_common.sh@10 -- # set +x 00:11:04.879 ************************************ 00:11:04.879 START TEST sw_hotplug 00:11:04.879 ************************************ 00:11:04.879 18:01:20 sw_hotplug -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:11:04.879 * Looking for test storage... 00:11:04.879 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:11:04.879 18:01:20 sw_hotplug -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:04.879 18:01:20 sw_hotplug -- common/autotest_common.sh@1691 -- # lcov --version 00:11:04.879 18:01:20 sw_hotplug -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:04.879 18:01:21 sw_hotplug -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:04.879 18:01:21 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:11:04.879 18:01:21 sw_hotplug -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:04.879 18:01:21 sw_hotplug -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.879 --rc genhtml_branch_coverage=1 00:11:04.879 --rc genhtml_function_coverage=1 00:11:04.879 --rc genhtml_legend=1 00:11:04.879 --rc geninfo_all_blocks=1 00:11:04.879 --rc geninfo_unexecuted_blocks=1 00:11:04.879 00:11:04.879 ' 00:11:04.879 18:01:21 sw_hotplug -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.879 --rc genhtml_branch_coverage=1 00:11:04.879 --rc genhtml_function_coverage=1 00:11:04.879 --rc genhtml_legend=1 00:11:04.879 --rc geninfo_all_blocks=1 00:11:04.879 --rc geninfo_unexecuted_blocks=1 00:11:04.879 00:11:04.879 ' 00:11:04.879 18:01:21 sw_hotplug -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.879 --rc genhtml_branch_coverage=1 00:11:04.879 --rc genhtml_function_coverage=1 00:11:04.879 --rc genhtml_legend=1 00:11:04.879 --rc geninfo_all_blocks=1 00:11:04.879 --rc geninfo_unexecuted_blocks=1 00:11:04.879 00:11:04.879 ' 00:11:04.879 18:01:21 sw_hotplug -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:04.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:04.879 --rc genhtml_branch_coverage=1 00:11:04.879 --rc genhtml_function_coverage=1 00:11:04.879 --rc genhtml_legend=1 00:11:04.879 --rc geninfo_all_blocks=1 00:11:04.879 --rc geninfo_unexecuted_blocks=1 00:11:04.879 00:11:04.879 ' 00:11:04.879 18:01:21 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:05.138 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.138 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.138 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.138 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.138 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:05.138 18:01:21 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:11:05.138 18:01:21 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:11:05.138 18:01:21 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
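The lcov version gate walked through above (scripts/common.sh's lt/cmp_versions) simply compares dot-separated components numerically; 1.15 sorts before 2, so the branch-coverage LCOV_OPTS get exported. A condensed sketch of that comparison, assuming purely numeric components (the real helper also normalizes each field through its decimal() check):

    # Succeeds (returns 0) when version $1 sorts before version $2.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1  # equal versions are not "less than"
    }
    lt "$(lcov --version | awk '{print $NF}')" 2 && echo "old lcov: enable branch flags"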
00:11:05.138 18:01:21 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@233 -- # local class 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:05.138 18:01:21 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:05.139 18:01:21 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@18 -- # local i 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:05.139 18:01:21 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:05.398 18:01:21 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:05.398 18:01:21 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:11:05.398 18:01:21 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:11:05.398 18:01:21 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:11:05.398 18:01:21 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:11:05.398 18:01:21 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:11:05.398 18:01:21 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:11:05.398 18:01:21 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:11:05.398 18:01:21 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:11:05.398 18:01:21 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:11:05.398 18:01:21 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:05.657 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:05.657 Waiting for block devices as requested 00:11:05.916 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.916 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:05.916 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:06.174 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:11.469 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:11.469 18:01:27 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:11:11.469 18:01:27 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:11.469 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:11:11.728 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:11.728 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:11:11.985 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:11:12.243 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:11:12.243 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:11:12.243 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:11:12.243 18:01:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68116 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:11:12.501 18:01:28 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:11:12.501 18:01:28 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:11:12.501 18:01:28 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:11:12.501 18:01:28 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:11:12.501 18:01:28 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 false 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:12.501 18:01:28 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:12.759 Initializing NVMe Controllers 00:11:12.759 Attaching to 0000:00:10.0 00:11:12.759 Attaching to 0000:00:11.0 00:11:12.759 Attached to 0000:00:10.0 00:11:12.759 Attached to 0000:00:11.0 00:11:12.759 Initialization complete. Starting I/O... 
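Before the hotplug app above was started, the trace enumerated the NVMe controllers and trimmed the list to two. Condensed, the nvme_in_userspace walk does the following (pipeline copied from the trace; the PCI_BLOCKED half of pci_can_use is omitted here):

    # NVMe controllers are PCI class 01, subclass 08, prog-if 02.
    nvmes=()
    for bdf in $(lspci -mm -n -D | grep -i -- -p02 \
                 | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' | tr -d '"'); do
        # pci_can_use: with PCI_ALLOWED empty, every controller passes.
        if [[ -z ${PCI_ALLOWED:-} || " $PCI_ALLOWED " == *" $bdf "* ]]; then
            nvmes+=("$bdf")
        fi
    done
    nvme_count=2
    nvmes=("${nvmes[@]::nvme_count}")  # the test only hotplugs the first two

setup.sh is then re-run with PCI_ALLOWED='0000:00:10.0 0000:00:11.0', which is why 12.0 and 13.0 are reported above as "Skipping denied controller".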
00:11:12.759 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:12.759 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:11:12.759 00:11:13.692 QEMU NVMe Ctrl (12340 ): 1056 I/Os completed (+1056) 00:11:13.692 QEMU NVMe Ctrl (12341 ): 1182 I/Os completed (+1182) 00:11:13.692 00:11:14.627 QEMU NVMe Ctrl (12340 ): 2336 I/Os completed (+1280) 00:11:14.627 QEMU NVMe Ctrl (12341 ): 2547 I/Os completed (+1365) 00:11:14.627 00:11:15.571 QEMU NVMe Ctrl (12340 ): 4104 I/Os completed (+1768) 00:11:15.571 QEMU NVMe Ctrl (12341 ): 4334 I/Os completed (+1787) 00:11:15.571 00:11:16.946 QEMU NVMe Ctrl (12340 ): 5812 I/Os completed (+1708) 00:11:16.946 QEMU NVMe Ctrl (12341 ): 6133 I/Os completed (+1799) 00:11:16.946 00:11:17.883 QEMU NVMe Ctrl (12340 ): 7640 I/Os completed (+1828) 00:11:17.883 QEMU NVMe Ctrl (12341 ): 7990 I/Os completed (+1857) 00:11:17.883 00:11:18.449 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:18.449 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.449 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.449 [2024-10-28 18:01:34.775656] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:18.449 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:18.449 [2024-10-28 18:01:34.778029] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.778108] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.778147] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.778177] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:18.449 [2024-10-28 18:01:34.781698] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.781773] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.781802] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.781828] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:18.449 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:18.449 [2024-10-28 18:01:34.802413] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
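Each of the three hotplug events around this point follows the same shape: while I/O runs, both allowed controllers are surprise-removed, the driver reports the failed state and aborts outstanding commands (the *ERROR* lines above), and the devices are then rescanned and rebound. A sketch of one cycle; note that the sysfs redirection targets are an assumption on my part, since the xtrace records only the echoed values (sh@40, @56, @59-62), not where they are written:

    for dev in "${nvmes[@]}"; do
        echo 1 > "/sys/bus/pci/devices/$dev/remove"     # surprise removal (sh@40)
    done
    echo 1 > /sys/bus/pci/rescan                        # bring them back (sh@56)
    for dev in "${nvmes[@]}"; do
        echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
        echo "$dev" > /sys/bus/pci/drivers_probe        # rebind (sh@59-62)
        echo '' > "/sys/bus/pci/devices/$dev/driver_override"
    done
    sleep 12                                            # let the app re-attach (sh@66)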
00:11:18.449 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:18.449 [2024-10-28 18:01:34.804666] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.804766] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.804804] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.804850] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:18.449 [2024-10-28 18:01:34.807975] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.808041] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.808072] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 [2024-10-28 18:01:34.808095] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:18.449 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:18.449 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:18.707 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.707 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.707 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:18.707 18:01:34 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:18.707 18:01:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.707 18:01:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:18.707 18:01:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:18.707 18:01:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:18.707 Attaching to 0000:00:10.0 00:11:18.707 Attached to 0000:00:10.0 00:11:18.707 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:11:18.707 00:11:18.707 18:01:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:18.707 18:01:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:18.707 18:01:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:18.707 Attaching to 0000:00:11.0 00:11:18.707 Attached to 0000:00:11.0 00:11:19.643 QEMU NVMe Ctrl (12340 ): 1655 I/Os completed (+1655) 00:11:19.643 QEMU NVMe Ctrl (12341 ): 1561 I/Os completed (+1561) 00:11:19.643 00:11:20.577 QEMU NVMe Ctrl (12340 ): 3477 I/Os completed (+1822) 00:11:20.577 QEMU NVMe Ctrl (12341 ): 3410 I/Os completed (+1849) 00:11:20.577 00:11:21.955 QEMU NVMe Ctrl (12340 ): 5056 I/Os completed (+1579) 00:11:21.955 QEMU NVMe Ctrl (12341 ): 5062 I/Os completed (+1652) 00:11:21.955 00:11:22.890 QEMU NVMe Ctrl (12340 ): 6603 I/Os completed (+1547) 00:11:22.890 QEMU NVMe Ctrl (12341 ): 6731 I/Os completed (+1669) 00:11:22.890 00:11:23.823 QEMU NVMe Ctrl (12340 ): 8136 I/Os completed (+1533) 00:11:23.824 QEMU NVMe Ctrl (12341 ): 8445 I/Os completed (+1714) 00:11:23.824 00:11:24.758 QEMU NVMe Ctrl (12340 ): 9768 I/Os completed (+1632) 00:11:24.758 QEMU NVMe Ctrl (12341 ): 10174 I/Os completed (+1729) 00:11:24.758 00:11:25.692 QEMU NVMe Ctrl (12340 ): 11465 I/Os completed (+1697) 00:11:25.692 QEMU NVMe Ctrl (12341 ): 11915 I/Os completed (+1741) 00:11:25.692 00:11:26.626 QEMU NVMe Ctrl (12340 ): 13221 I/Os completed (+1756) 00:11:26.626 QEMU NVMe 
Ctrl (12341 ): 13707 I/Os completed (+1792) 00:11:26.626 00:11:27.561 QEMU NVMe Ctrl (12340 ): 14849 I/Os completed (+1628) 00:11:27.561 QEMU NVMe Ctrl (12341 ): 15451 I/Os completed (+1744) 00:11:27.561 00:11:28.933 QEMU NVMe Ctrl (12340 ): 16617 I/Os completed (+1768) 00:11:28.933 QEMU NVMe Ctrl (12341 ): 17255 I/Os completed (+1804) 00:11:28.933 00:11:29.865 QEMU NVMe Ctrl (12340 ): 18354 I/Os completed (+1737) 00:11:29.865 QEMU NVMe Ctrl (12341 ): 19057 I/Os completed (+1802) 00:11:29.865 00:11:30.797 QEMU NVMe Ctrl (12340 ): 20037 I/Os completed (+1683) 00:11:30.797 QEMU NVMe Ctrl (12341 ): 20797 I/Os completed (+1740) 00:11:30.797 00:11:30.797 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:30.797 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:30.797 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.797 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.797 [2024-10-28 18:01:47.113206] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:30.797 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:30.797 [2024-10-28 18:01:47.115170] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.797 [2024-10-28 18:01:47.115241] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.797 [2024-10-28 18:01:47.115269] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.797 [2024-10-28 18:01:47.115310] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.797 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:30.797 [2024-10-28 18:01:47.118508] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.797 [2024-10-28 18:01:47.118573] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.797 [2024-10-28 18:01:47.118597] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 [2024-10-28 18:01:47.118620] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:30.798 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:30.798 [2024-10-28 18:01:47.136969] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:30.798 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:30.798 [2024-10-28 18:01:47.138785] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 [2024-10-28 18:01:47.138859] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 [2024-10-28 18:01:47.138896] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 [2024-10-28 18:01:47.138920] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:30.798 [2024-10-28 18:01:47.141587] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 [2024-10-28 18:01:47.141642] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 [2024-10-28 18:01:47.141668] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 [2024-10-28 18:01:47.141692] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:30.798 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:30.798 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:30.798 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:30.798 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:30.798 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:31.067 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:31.067 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.067 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:31.067 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:31.067 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:31.067 Attaching to 0000:00:10.0 00:11:31.067 Attached to 0000:00:10.0 00:11:31.067 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:31.067 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:31.067 18:01:47 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:31.067 Attaching to 0000:00:11.0 00:11:31.067 Attached to 0000:00:11.0 00:11:31.647 QEMU NVMe Ctrl (12340 ): 1262 I/Os completed (+1262) 00:11:31.647 QEMU NVMe Ctrl (12341 ): 1103 I/Os completed (+1103) 00:11:31.647 00:11:32.578 QEMU NVMe Ctrl (12340 ): 2911 I/Os completed (+1649) 00:11:32.578 QEMU NVMe Ctrl (12341 ): 2793 I/Os completed (+1690) 00:11:32.578 00:11:33.951 QEMU NVMe Ctrl (12340 ): 4401 I/Os completed (+1490) 00:11:33.951 QEMU NVMe Ctrl (12341 ): 4433 I/Os completed (+1640) 00:11:33.951 00:11:34.884 QEMU NVMe Ctrl (12340 ): 5996 I/Os completed (+1595) 00:11:34.884 QEMU NVMe Ctrl (12341 ): 6205 I/Os completed (+1772) 00:11:34.884 00:11:35.818 QEMU NVMe Ctrl (12340 ): 7547 I/Os completed (+1551) 00:11:35.818 QEMU NVMe Ctrl (12341 ): 7844 I/Os completed (+1639) 00:11:35.818 00:11:36.752 QEMU NVMe Ctrl (12340 ): 9232 I/Os completed (+1685) 00:11:36.752 QEMU NVMe Ctrl (12341 ): 9613 I/Os completed (+1769) 00:11:36.752 00:11:37.686 QEMU NVMe Ctrl (12340 ): 10884 I/Os completed (+1652) 00:11:37.686 QEMU NVMe Ctrl (12341 ): 11499 I/Os completed (+1886) 00:11:37.686 00:11:38.621 QEMU NVMe Ctrl (12340 ): 12564 I/Os completed (+1680) 00:11:38.621 QEMU NVMe Ctrl (12341 ): 13310 I/Os completed (+1811) 00:11:38.621 00:11:39.555 QEMU 
NVMe Ctrl (12340 ): 14159 I/Os completed (+1595) 00:11:39.555 QEMU NVMe Ctrl (12341 ): 15024 I/Os completed (+1714) 00:11:39.555 00:11:40.928 QEMU NVMe Ctrl (12340 ): 15967 I/Os completed (+1808) 00:11:40.928 QEMU NVMe Ctrl (12341 ): 16864 I/Os completed (+1840) 00:11:40.928 00:11:41.863 QEMU NVMe Ctrl (12340 ): 17779 I/Os completed (+1812) 00:11:41.863 QEMU NVMe Ctrl (12341 ): 18724 I/Os completed (+1860) 00:11:41.863 00:11:42.797 QEMU NVMe Ctrl (12340 ): 19459 I/Os completed (+1680) 00:11:42.797 QEMU NVMe Ctrl (12341 ): 20535 I/Os completed (+1811) 00:11:42.797 00:11:43.055 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:43.055 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:43.055 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.055 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.055 [2024-10-28 18:01:59.399962] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:43.055 Controller removed: QEMU NVMe Ctrl (12340 ) 00:11:43.055 [2024-10-28 18:01:59.401932] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.402005] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.402035] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.402060] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:43.055 [2024-10-28 18:01:59.405032] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.405093] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.405122] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.405144] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:43.055 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:43.055 [2024-10-28 18:01:59.428859] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:11:43.055 Controller removed: QEMU NVMe Ctrl (12341 ) 00:11:43.055 [2024-10-28 18:01:59.430600] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.430662] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.430691] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.430715] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:43.055 [2024-10-28 18:01:59.433286] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.433341] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.433369] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 [2024-10-28 18:01:59.433388] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:43.055 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:11:43.055 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:43.055 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:11:43.055 EAL: Scan for (pci) bus failed. 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:43.315 Attaching to 0000:00:10.0 00:11:43.315 Attached to 0000:00:10.0 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:43.315 18:01:59 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:43.315 Attaching to 0000:00:11.0 00:11:43.315 Attached to 0000:00:11.0 00:11:43.315 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:11:43.315 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:11:43.315 [2024-10-28 18:01:59.728823] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:11:55.510 18:02:11 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:11:55.510 18:02:11 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:55.510 18:02:11 sw_hotplug -- common/autotest_common.sh@717 -- # time=42.94 00:11:55.510 18:02:11 sw_hotplug -- common/autotest_common.sh@718 -- # echo 42.94 00:11:55.510 18:02:11 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:11:55.510 18:02:11 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.94 00:11:55.510 18:02:11 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.94 2 00:11:55.510 remove_attach_helper took 42.94s to complete (handling 2 nvme drive(s)) 18:02:11 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:12:02.063 18:02:17 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68116 00:12:02.063 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68116) - No such process 00:12:02.063 18:02:17 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68116 00:12:02.063 18:02:17 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:12:02.063 18:02:17 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:12:02.063 18:02:17 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:12:02.063 18:02:17 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=68666 00:12:02.063 18:02:17 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:02.063 18:02:17 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:12:02.063 18:02:17 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 68666 00:12:02.063 18:02:17 sw_hotplug -- common/autotest_common.sh@833 -- # '[' -z 68666 ']' 00:12:02.063 18:02:17 sw_hotplug -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.063 18:02:17 sw_hotplug -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:02.063 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.063 18:02:17 sw_hotplug -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.063 18:02:17 sw_hotplug -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:02.063 18:02:17 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.063 [2024-10-28 18:02:17.848454] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
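The kill -0 / wait pair above is the test's liveness handling: the hotplug example has already exited on its own after its three events, so kill -0 reports "No such process" and the script merely reaps the PID. The killprocess helper traced earlier in this log follows the same pattern; a trimmed reconstruction:

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0          # already exited, as here
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1          # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid"
    }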
00:12:02.063 [2024-10-28 18:02:17.848655] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68666 ] 00:12:02.063 [2024-10-28 18:02:18.037083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.063 [2024-10-28 18:02:18.140555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@866 -- # return 0 00:12:02.629 18:02:18 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.629 18:02:18 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:12:02.629 18:02:18 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:02.629 18:02:18 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:02.629 18:02:18 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:02.629 18:02:18 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:02.629 18:02:18 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:02.629 18:02:18 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:02.629 18:02:18 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:02.629 18:02:18 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.228 18:02:24 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.228 18:02:24 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.228 18:02:24 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.228 [2024-10-28 18:02:24.987212] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
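From here the same three hotplug events are repeated, but observed from the bdev layer of a freshly started spdk_tgt rather than the example app: bdev_nvme_set_hotplug -e turns on SPDK's NVMe hotplug monitor and the helper runs with use_bdev=true. The two RPCs involved, shown with the repo's rpc.py as used elsewhere in this log (rpc_cmd is the test framework's wrapper around it):

    # Enable the NVMe hotplug monitor inside the running target ...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -e
    # ... and disable it again once the test is done.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_hotplug -d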
00:12:09.228 [2024-10-28 18:02:24.990078] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.228 [2024-10-28 18:02:24.990140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.228 [2024-10-28 18:02:24.990166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.228 [2024-10-28 18:02:24.990196] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.228 [2024-10-28 18:02:24.990212] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.228 [2024-10-28 18:02:24.990228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.228 [2024-10-28 18:02:24.990243] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.228 [2024-10-28 18:02:24.990259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.228 [2024-10-28 18:02:24.990272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.228 [2024-10-28 18:02:24.990293] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.228 [2024-10-28 18:02:24.990307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.228 [2024-10-28 18:02:24.990322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.228 18:02:24 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:09.228 [2024-10-28 18:02:25.387205] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
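In bdev mode the helper no longer watches sysfs to see whether a controller is gone; it asks the target. The bdev_bdfs helper traced above (sw_hotplug.sh@12-13) and the wait loop around it reduce to the following (loop body condensed from the sh@50-51 trace):

    # List the PCI addresses backing the target's current NVMe bdevs.
    bdev_bdfs() {
        rpc_cmd bdev_get_bdevs \
            | jq -r '.[].driver_specific.nvme[].pci_address' \
            | sort -u
    }
    # Poll until the removed controllers disappear from the bdev list.
    bdfs=($(bdev_bdfs))
    while (( ${#bdfs[@]} > 0 )); do
        printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
        sleep 0.5
        bdfs=($(bdev_bdfs))
    done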
00:12:09.228 [2024-10-28 18:02:25.389813] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.228 [2024-10-28 18:02:25.389901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.228 [2024-10-28 18:02:25.389925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.228 [2024-10-28 18:02:25.389950] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.228 [2024-10-28 18:02:25.389967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.228 [2024-10-28 18:02:25.389980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.228 [2024-10-28 18:02:25.390005] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.228 [2024-10-28 18:02:25.390017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.228 [2024-10-28 18:02:25.390031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.228 [2024-10-28 18:02:25.390043] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:09.228 [2024-10-28 18:02:25.390088] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:09.228 [2024-10-28 18:02:25.390117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:09.228 18:02:25 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.228 18:02:25 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:09.228 18:02:25 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.228 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:09.486 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:09.486 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.486 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:09.486 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:09.486 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:09.486 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:09.486 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:09.486 18:02:25 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.743 18:02:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 18:02:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 18:02:37 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:21.743 18:02:37 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:21.743 18:02:37 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.743 18:02:37 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:21.743 [2024-10-28 18:02:37.987396] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
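After the twelve-second settle, the helper confirms over the same RPC listing that exactly the original pair of controllers re-attached before it starts the next event. This is the sh@70-71 check traced above, where the expected set is quoted so it must match literally (the exit on mismatch is inferred):

    bdfs=($(bdev_bdfs))
    # e.g. "0000:00:10.0 0000:00:11.0" must come back, in sorted order.
    [[ ${bdfs[*]} == "${nvmes[*]}" ]] || exit 1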
00:12:21.743 [2024-10-28 18:02:37.990417] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.743 [2024-10-28 18:02:37.990507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.743 [2024-10-28 18:02:37.990529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.743 [2024-10-28 18:02:37.990559] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.743 [2024-10-28 18:02:37.990575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.743 [2024-10-28 18:02:37.990591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.743 [2024-10-28 18:02:37.990606] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.743 [2024-10-28 18:02:37.990621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.743 [2024-10-28 18:02:37.990635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.743 [2024-10-28 18:02:37.990652] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:21.743 [2024-10-28 18:02:37.990665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:21.743 [2024-10-28 18:02:37.990681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:21.743 18:02:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.743 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:21.743 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:22.001 [2024-10-28 18:02:38.387397] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:22.001 [2024-10-28 18:02:38.390058] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.001 [2024-10-28 18:02:38.390140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.001 [2024-10-28 18:02:38.390166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.001 [2024-10-28 18:02:38.390193] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.001 [2024-10-28 18:02:38.390209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.002 [2024-10-28 18:02:38.390223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.002 [2024-10-28 18:02:38.390239] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.002 [2024-10-28 18:02:38.390252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.002 [2024-10-28 18:02:38.390267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.002 [2024-10-28 18:02:38.390280] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:22.002 [2024-10-28 18:02:38.390295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:22.002 [2024-10-28 18:02:38.390307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:22.259 18:02:38 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.259 18:02:38 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:22.259 18:02:38 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.259 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:22.517 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:22.517 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.517 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:22.517 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:22.517 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:22.517 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:22.517 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:22.517 18:02:38 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.716 18:02:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.716 18:02:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.716 18:02:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:34.716 18:02:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:34.716 18:02:50 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.716 18:02:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:34.716 [2024-10-28 18:02:50.987640] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:12:34.716 [2024-10-28 18:02:50.990535] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.716 [2024-10-28 18:02:50.990608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.716 [2024-10-28 18:02:50.990630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.716 [2024-10-28 18:02:50.990659] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.716 [2024-10-28 18:02:50.990675] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.716 [2024-10-28 18:02:50.990692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.716 [2024-10-28 18:02:50.990723] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.716 [2024-10-28 18:02:50.990738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.716 [2024-10-28 18:02:50.990752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.716 [2024-10-28 18:02:50.990768] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.716 [2024-10-28 18:02:50.990782] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.716 [2024-10-28 18:02:50.990797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.716 18:02:50 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.716 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:34.716 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:34.974 [2024-10-28 18:02:51.387646] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:12:34.974 [2024-10-28 18:02:51.390615] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.975 [2024-10-28 18:02:51.390681] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.975 [2024-10-28 18:02:51.390709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.975 [2024-10-28 18:02:51.390736] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.975 [2024-10-28 18:02:51.390758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.975 [2024-10-28 18:02:51.390773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.975 [2024-10-28 18:02:51.390822] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.975 [2024-10-28 18:02:51.390836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.975 [2024-10-28 18:02:51.390866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:34.975 [2024-10-28 18:02:51.390900] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:34.975 [2024-10-28 18:02:51.390931] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:34.975 [2024-10-28 18:02:51.390945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:35.233 18:02:51 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.233 18:02:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:35.233 18:02:51 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:35.233 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:35.491 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:35.491 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:35.491 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:35.491 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:35.491 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:35.491 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:35.491 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:35.491 18:02:51 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.02 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.02 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.02 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.02 2 00:12:47.826 remove_attach_helper took 45.02s to complete (handling 2 nvme drive(s)) 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@707 -- # local cmd_es=0 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@709 -- # [[ -t 0 ]] 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@709 -- # exec 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@711 -- # local time=0 TIMEFORMAT=%2R 00:12:47.826 18:03:03 sw_hotplug -- common/autotest_common.sh@717 -- # remove_attach_helper 3 6 true 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:12:47.826 18:03:03 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:12:47.826 18:03:03 sw_hotplug -- 
nvme/sw_hotplug.sh@36 -- # sleep 6 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.405 18:03:09 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.405 18:03:09 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.405 18:03:09 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.405 18:03:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.405 [2024-10-28 18:03:10.039346] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:12:54.405 [2024-10-28 18:03:10.041407] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.405 [2024-10-28 18:03:10.041477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.405 [2024-10-28 18:03:10.041498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.405 [2024-10-28 18:03:10.041529] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.405 [2024-10-28 18:03:10.041544] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.405 [2024-10-28 18:03:10.041574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.405 [2024-10-28 18:03:10.041589] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.405 [2024-10-28 18:03:10.041604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.405 [2024-10-28 18:03:10.041616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.405 [2024-10-28 18:03:10.041632] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.405 [2024-10-28 18:03:10.041644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.405 [2024-10-28 18:03:10.041662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:12:54.405 [2024-10-28 18:03:10.439309] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
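The pair of `echo 1` writes traced at sw_hotplug.sh@40 is the surprise-removal trigger, one per controller; the nvme_ctrlr_fail and qpair-abort messages that follow are SPDK reacting to the devices vanishing underneath it. The xtrace output does not show the redirection targets, so the sysfs path below is an assumption based on the standard Linux PCI remove interface:

for dev in "${nvmes[@]}"; do
    # Assumed target of the traced 'echo 1': ask the kernel to drop the
    # PCI function as if the drive had been physically pulled.
    echo 1 > "/sys/bus/pci/devices/$dev/remove"
done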
00:12:54.405 [2024-10-28 18:03:10.441215] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.405 [2024-10-28 18:03:10.441275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.405 [2024-10-28 18:03:10.441313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.405 [2024-10-28 18:03:10.441336] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.405 [2024-10-28 18:03:10.441352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.405 [2024-10-28 18:03:10.441364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.405 [2024-10-28 18:03:10.441379] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.405 [2024-10-28 18:03:10.441391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.405 [2024-10-28 18:03:10.441404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.405 [2024-10-28 18:03:10.441417] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:12:54.405 [2024-10-28 18:03:10.441431] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:54.405 [2024-10-28 18:03:10.441442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:12:54.405 18:03:10 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:12:54.405 18:03:10 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:12:54.405 18:03:10 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:12:54.405 18:03:10 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.604 18:03:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.604 18:03:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.604 18:03:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:06.604 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:06.605 [2024-10-28 18:03:22.939498] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:13:06.605 [2024-10-28 18:03:22.941610] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.605 [2024-10-28 18:03:22.941700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.605 [2024-10-28 18:03:22.941722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.605 [2024-10-28 18:03:22.941750] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.605 [2024-10-28 18:03:22.941766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.605 [2024-10-28 18:03:22.941782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.605 [2024-10-28 18:03:22.941797] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.605 [2024-10-28 18:03:22.941812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.605 [2024-10-28 18:03:22.941825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.605 [2024-10-28 18:03:22.941860] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:06.605 [2024-10-28 18:03:22.941890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:06.605 [2024-10-28 18:03:22.941908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:06.605 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:06.605 18:03:22 
sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:06.605 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:06.605 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:06.605 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:06.605 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:06.605 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:06.605 18:03:22 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:06.605 18:03:22 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.605 18:03:22 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:06.605 18:03:22 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.605 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:06.605 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:06.863 [2024-10-28 18:03:23.339521] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:13:07.121 [2024-10-28 18:03:23.341570] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.121 [2024-10-28 18:03:23.341648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.121 [2024-10-28 18:03:23.341687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.121 [2024-10-28 18:03:23.341714] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.121 [2024-10-28 18:03:23.341734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.121 [2024-10-28 18:03:23.341748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.121 [2024-10-28 18:03:23.341764] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.121 [2024-10-28 18:03:23.341777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.121 [2024-10-28 18:03:23.341792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.121 [2024-10-28 18:03:23.341822] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:07.121 [2024-10-28 18:03:23.341838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:07.121 [2024-10-28 18:03:23.341851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:07.121 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:07.121 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:07.121 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:07.121 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:07.121 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:07.121 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd 
bdev_get_bdevs 00:13:07.121 18:03:23 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.121 18:03:23 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:07.121 18:03:23 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.121 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:07.121 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:07.379 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:07.379 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:07.379 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:07.379 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:07.379 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:07.379 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:07.379 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:07.379 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:13:07.379 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:07.636 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:07.636 18:03:23 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.826 18:03:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.826 18:03:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.826 18:03:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:19.826 [2024-10-28 18:03:35.939647] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
00:13:19.826 [2024-10-28 18:03:35.941902] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.826 [2024-10-28 18:03:35.941992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.826 [2024-10-28 18:03:35.942015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.826 [2024-10-28 18:03:35.942048] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.826 [2024-10-28 18:03:35.942063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.826 [2024-10-28 18:03:35.942079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.826 [2024-10-28 18:03:35.942095] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.826 [2024-10-28 18:03:35.942114] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.826 [2024-10-28 18:03:35.942127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.826 [2024-10-28 18:03:35.942143] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:19.826 [2024-10-28 18:03:35.942157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:19.826 [2024-10-28 18:03:35.942172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:13:19.826 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:13:19.827 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:13:19.827 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:19.827 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:19.827 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:19.827 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:19.827 18:03:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:19.827 18:03:35 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:19.827 18:03:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:19.827 18:03:35 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:19.827 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:13:19.827 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:13:20.085 [2024-10-28 18:03:36.339640] nvme_ctrlr.c:1109:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:13:20.085 [2024-10-28 18:03:36.341918] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.085 [2024-10-28 18:03:36.341994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.085 [2024-10-28 18:03:36.342015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.085 [2024-10-28 18:03:36.342039] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.085 [2024-10-28 18:03:36.342055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.085 [2024-10-28 18:03:36.342068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.085 [2024-10-28 18:03:36.342084] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.085 [2024-10-28 18:03:36.342096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.085 [2024-10-28 18:03:36.342110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.085 [2024-10-28 18:03:36.342123] nvme_pcie_common.c: 772:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:13:20.085 [2024-10-28 18:03:36.342139] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:20.085 [2024-10-28 18:03:36.342151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:20.085 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:13:20.085 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:13:20.085 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:13:20.085 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:20.085 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:20.085 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:20.085 18:03:36 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:20.085 18:03:36 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:20.085 18:03:36 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:13:20.343 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 
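After each removal the script polls bdev_bdfs until no PCI address is left, which is the (( 2 > 0 )), (( 1 > 0 )), (( 0 > 0 )) countdown visible in the traces above, then re-attaches the controllers at sh@56-62. The loop shape and the sysfs targets below are assumptions pieced together from the trace, since the redirections themselves are not echoed:

# Wait for the bdevs to disappear (sh@50-51).
while bdfs=($(bdev_bdfs)); ((${#bdfs[@]} > 0)); do
    printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
    sleep 0.5
done

# Re-attach (sh@56-62): rescan the bus, then steer each function back to
# the userspace driver. These paths are assumed, not shown in the trace.
echo 1 > /sys/bus/pci/rescan
for dev in "${nvmes[@]}"; do
    echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
    echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind
    echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done

The `sleep 12` at sh@66 that follows gives the hotplug poller time to re-enumerate both controllers before sh@70 re-reads the bdev list.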
00:13:20.601 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:13:20.601 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:13:20.601 18:03:36 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:13:32.824 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:13:32.824 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:13:32.824 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:13:32.824 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:13:32.824 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:13:32.824 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.825 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:13:32.825 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@717 -- # time=45.01 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@718 -- # echo 45.01 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@720 -- # return 0 00:13:32.825 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.01 00:13:32.825 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.01 2 00:13:32.825 remove_attach_helper took 45.01s to complete (handling 2 nvme drive(s)) 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:13:32.825 18:03:48 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 68666 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@952 -- # '[' -z 68666 ']' 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@956 -- # kill -0 68666 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@957 -- # uname 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:32.825 18:03:48 sw_hotplug -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 68666 00:13:32.825 18:03:49 sw_hotplug -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:32.825 18:03:49 sw_hotplug -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:32.825 killing process with pid 68666 00:13:32.825 18:03:49 sw_hotplug -- common/autotest_common.sh@970 -- # echo 'killing process with pid 68666' 00:13:32.825 18:03:49 sw_hotplug -- common/autotest_common.sh@971 -- # kill 68666 00:13:32.825 18:03:49 sw_hotplug -- common/autotest_common.sh@976 -- # wait 68666 00:13:34.727 18:03:50 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:13:34.985 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:13:35.243 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:35.243 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:13:35.501 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.501 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:13:35.501 00:13:35.501 real 2m31.068s 00:13:35.501 user 1m51.036s 00:13:35.501 sys 0m19.685s 00:13:35.501 18:03:51 sw_hotplug -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:13:35.501 ************************************ 00:13:35.501 END TEST sw_hotplug 00:13:35.501 ************************************ 00:13:35.501 18:03:51 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:13:35.501 18:03:51 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:13:35.501 18:03:51 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:35.501 18:03:51 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:35.501 18:03:51 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:35.501 18:03:51 -- common/autotest_common.sh@10 -- # set +x 00:13:35.501 ************************************ 00:13:35.501 START TEST nvme_xnvme 00:13:35.501 ************************************ 00:13:35.501 18:03:51 nvme_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:13:35.759 * Looking for test storage... 00:13:35.759 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:13:35.759 18:03:52 nvme_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:35.759 18:03:52 nvme_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:13:35.759 18:03:52 nvme_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:35.759 18:03:52 nvme_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:35.759 18:03:52 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:13:35.760 18:03:52 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:13:35.760 18:03:52 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:35.760 18:03:52 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:35.760 18:03:52 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:13:35.760 18:03:52 nvme_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:35.760 18:03:52 nvme_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:35.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.760 --rc genhtml_branch_coverage=1 00:13:35.760 --rc genhtml_function_coverage=1 00:13:35.760 --rc genhtml_legend=1 00:13:35.760 --rc geninfo_all_blocks=1 00:13:35.760 --rc geninfo_unexecuted_blocks=1 00:13:35.760 00:13:35.760 ' 00:13:35.760 18:03:52 nvme_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:35.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.760 --rc genhtml_branch_coverage=1 00:13:35.760 --rc genhtml_function_coverage=1 00:13:35.760 --rc genhtml_legend=1 00:13:35.760 --rc geninfo_all_blocks=1 00:13:35.760 --rc geninfo_unexecuted_blocks=1 00:13:35.760 00:13:35.760 ' 00:13:35.760 18:03:52 nvme_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:35.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.760 --rc genhtml_branch_coverage=1 00:13:35.760 --rc genhtml_function_coverage=1 00:13:35.760 --rc genhtml_legend=1 00:13:35.760 --rc geninfo_all_blocks=1 00:13:35.760 --rc geninfo_unexecuted_blocks=1 00:13:35.760 00:13:35.760 ' 00:13:35.760 18:03:52 nvme_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:35.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:35.760 --rc genhtml_branch_coverage=1 00:13:35.760 --rc genhtml_function_coverage=1 00:13:35.760 --rc genhtml_legend=1 00:13:35.760 --rc geninfo_all_blocks=1 00:13:35.760 --rc geninfo_unexecuted_blocks=1 00:13:35.760 00:13:35.760 ' 00:13:35.760 18:03:52 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:35.760 18:03:52 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:13:35.760 18:03:52 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:35.760 18:03:52 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:35.760 18:03:52 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:35.760 18:03:52 nvme_xnvme -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.760 18:03:52 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.760 18:03:52 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.760 18:03:52 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:13:35.760 18:03:52 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:35.760 18:03:52 nvme_xnvme -- xnvme/xnvme.sh@85 -- # run_test xnvme_to_malloc_dd_copy malloc_to_xnvme_copy 00:13:35.760 18:03:52 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:13:35.760 18:03:52 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:35.760 18:03:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:35.760 ************************************ 00:13:35.760 START TEST xnvme_to_malloc_dd_copy 00:13:35.760 ************************************ 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1127 -- # malloc_to_xnvme_copy 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@14 -- # init_null_blk gb=1 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@187 -- # return 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@16 -- # local mbdev0=malloc0 mbdev0_bs=512 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # xnvme_io=() 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@17 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@18 -- # local io 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@20 -- # xnvme_io+=(libaio) 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@21 -- # xnvme_io+=(io_uring) 00:13:35.760 18:03:52 
nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@25 -- # mbdev0_b=2097152 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@26 -- # xnvme0_dev=/dev/nullb0 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='2097152' ['block_size']='512') 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@28 -- # local -A method_bdev_malloc_create_0 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # method_bdev_xnvme_create_0=() 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@34 -- # local -A method_bdev_xnvme_create_0 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@35 -- # method_bdev_xnvme_create_0["name"]=null0 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@36 -- # method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:35.760 18:03:52 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:35.760 { 00:13:35.760 "subsystems": [ 00:13:35.760 { 00:13:35.760 "subsystem": "bdev", 00:13:35.760 "config": [ 00:13:35.760 { 00:13:35.760 "params": { 00:13:35.760 "block_size": 512, 00:13:35.760 "num_blocks": 2097152, 00:13:35.760 "name": "malloc0" 00:13:35.760 }, 00:13:35.760 "method": "bdev_malloc_create" 00:13:35.760 }, 00:13:35.760 { 00:13:35.760 "params": { 00:13:35.760 "io_mechanism": "libaio", 00:13:35.760 "filename": "/dev/nullb0", 00:13:35.760 "name": "null0" 00:13:35.760 }, 00:13:35.760 "method": "bdev_xnvme_create" 00:13:35.760 }, 00:13:35.760 { 00:13:35.760 "method": "bdev_wait_for_examine" 00:13:35.760 } 00:13:35.760 ] 00:13:35.760 } 00:13:35.760 ] 00:13:35.760 } 00:13:36.018 [2024-10-28 18:03:52.269871] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
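gen_conf prints the JSON document shown above and xnvme.sh hands it to spdk_dd as /dev/fd/62 via process substitution, so no config file ever touches disk. A standalone reproduction of the same copy step under that assumption (the here-doc stands in for the traced gen_conf helper; paths are as in the trace):

/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json <(cat <<'CONF'
{ "subsystems": [ { "subsystem": "bdev", "config": [
  { "params": { "block_size": 512, "num_blocks": 2097152, "name": "malloc0" },
    "method": "bdev_malloc_create" },
  { "params": { "io_mechanism": "libaio", "filename": "/dev/nullb0", "name": "null0" },
    "method": "bdev_xnvme_create" },
  { "method": "bdev_wait_for_examine" } ] } ] }
CONF
)

The 1024 MB total in the Copying progress lines below matches the malloc bdev geometry set at xnvme.sh@25: 2,097,152 blocks x 512 bytes = 1,073,741,824 bytes, i.e. 1 GiB.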
00:13:36.018 [2024-10-28 18:03:52.270063] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70018 ] 00:13:36.018 [2024-10-28 18:03:52.456865] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:36.276 [2024-10-28 18:03:52.584155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:38.808  [2024-10-28T18:03:55.853Z] Copying: 191/1024 [MB] (191 MBps) [2024-10-28T18:03:56.791Z] Copying: 383/1024 [MB] (191 MBps) [2024-10-28T18:03:58.166Z] Copying: 577/1024 [MB] (194 MBps) [2024-10-28T18:03:59.100Z] Copying: 770/1024 [MB] (193 MBps) [2024-10-28T18:03:59.100Z] Copying: 964/1024 [MB] (193 MBps) [2024-10-28T18:04:02.383Z] Copying: 1024/1024 [MB] (average 193 MBps) 00:13:45.906 00:13:45.906 18:04:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:13:45.906 18:04:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:13:45.906 18:04:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:45.906 18:04:02 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:45.906 { 00:13:45.906 "subsystems": [ 00:13:45.906 { 00:13:45.906 "subsystem": "bdev", 00:13:45.906 "config": [ 00:13:45.906 { 00:13:45.906 "params": { 00:13:45.906 "block_size": 512, 00:13:45.906 "num_blocks": 2097152, 00:13:45.906 "name": "malloc0" 00:13:45.906 }, 00:13:45.906 "method": "bdev_malloc_create" 00:13:45.906 }, 00:13:45.906 { 00:13:45.906 "params": { 00:13:45.906 "io_mechanism": "libaio", 00:13:45.906 "filename": "/dev/nullb0", 00:13:45.906 "name": "null0" 00:13:45.906 }, 00:13:45.906 "method": "bdev_xnvme_create" 00:13:45.906 }, 00:13:45.906 { 00:13:45.906 "method": "bdev_wait_for_examine" 00:13:45.906 } 00:13:45.906 ] 00:13:45.906 } 00:13:45.906 ] 00:13:45.906 } 00:13:45.906 [2024-10-28 18:04:02.160241] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:13:45.906 [2024-10-28 18:04:02.160484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70133 ] 00:13:45.906 [2024-10-28 18:04:02.341530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:46.164 [2024-10-28 18:04:02.429635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.694  [2024-10-28T18:04:05.738Z] Copying: 184/1024 [MB] (184 MBps) [2024-10-28T18:04:06.673Z] Copying: 373/1024 [MB] (188 MBps) [2024-10-28T18:04:07.606Z] Copying: 561/1024 [MB] (187 MBps) [2024-10-28T18:04:08.981Z] Copying: 748/1024 [MB] (187 MBps) [2024-10-28T18:04:09.240Z] Copying: 937/1024 [MB] (189 MBps) [2024-10-28T18:04:12.528Z] Copying: 1024/1024 [MB] (average 187 MBps) 00:13:56.050 00:13:56.050 18:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@38 -- # for io in "${xnvme_io[@]}" 00:13:56.050 18:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@39 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:13:56.050 18:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=null0 --json /dev/fd/62 00:13:56.050 18:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@42 -- # gen_conf 00:13:56.050 18:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:13:56.050 18:04:12 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:13:56.050 { 00:13:56.050 "subsystems": [ 00:13:56.050 { 00:13:56.050 "subsystem": "bdev", 00:13:56.050 "config": [ 00:13:56.050 { 00:13:56.050 "params": { 00:13:56.050 "block_size": 512, 00:13:56.050 "num_blocks": 2097152, 00:13:56.050 "name": "malloc0" 00:13:56.050 }, 00:13:56.050 "method": "bdev_malloc_create" 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "params": { 00:13:56.050 "io_mechanism": "io_uring", 00:13:56.050 "filename": "/dev/nullb0", 00:13:56.050 "name": "null0" 00:13:56.050 }, 00:13:56.050 "method": "bdev_xnvme_create" 00:13:56.050 }, 00:13:56.050 { 00:13:56.050 "method": "bdev_wait_for_examine" 00:13:56.050 } 00:13:56.050 ] 00:13:56.050 } 00:13:56.050 ] 00:13:56.050 } 00:13:56.050 [2024-10-28 18:04:12.164671] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
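The second engine pass reuses the identical bdev table; the only change between the libaio and io_uring runs is the single associative-array assignment traced at xnvme.sh@39 above, after which gen_conf is rendered again:

method_bdev_xnvme_create_0["io_mechanism"]=io_uring   # was libaio on the first pass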
00:13:56.050 [2024-10-28 18:04:12.164865] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70247 ] 00:13:56.050 [2024-10-28 18:04:12.346170] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:56.050 [2024-10-28 18:04:12.443296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.586  [2024-10-28T18:04:15.635Z] Copying: 205/1024 [MB] (205 MBps) [2024-10-28T18:04:16.570Z] Copying: 410/1024 [MB] (204 MBps) [2024-10-28T18:04:17.946Z] Copying: 615/1024 [MB] (204 MBps) [2024-10-28T18:04:18.885Z] Copying: 819/1024 [MB] (204 MBps) [2024-10-28T18:04:18.885Z] Copying: 1023/1024 [MB] (203 MBps) [2024-10-28T18:04:22.169Z] Copying: 1024/1024 [MB] (average 204 MBps) 00:14:05.691 00:14:05.691 18:04:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=null0 --ob=malloc0 --json /dev/fd/62 00:14:05.691 18:04:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@47 -- # gen_conf 00:14:05.691 18:04:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@31 -- # xtrace_disable 00:14:05.691 18:04:21 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:05.691 { 00:14:05.691 "subsystems": [ 00:14:05.691 { 00:14:05.691 "subsystem": "bdev", 00:14:05.691 "config": [ 00:14:05.691 { 00:14:05.691 "params": { 00:14:05.691 "block_size": 512, 00:14:05.691 "num_blocks": 2097152, 00:14:05.691 "name": "malloc0" 00:14:05.691 }, 00:14:05.691 "method": "bdev_malloc_create" 00:14:05.691 }, 00:14:05.691 { 00:14:05.691 "params": { 00:14:05.691 "io_mechanism": "io_uring", 00:14:05.691 "filename": "/dev/nullb0", 00:14:05.691 "name": "null0" 00:14:05.691 }, 00:14:05.691 "method": "bdev_xnvme_create" 00:14:05.691 }, 00:14:05.691 { 00:14:05.691 "method": "bdev_wait_for_examine" 00:14:05.691 } 00:14:05.691 ] 00:14:05.691 } 00:14:05.691 ] 00:14:05.691 } 00:14:05.691 [2024-10-28 18:04:21.684942] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:14:05.691 [2024-10-28 18:04:21.685141] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70352 ] 00:14:05.691 [2024-10-28 18:04:21.857338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:05.691 [2024-10-28 18:04:21.946665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.595  [2024-10-28T18:04:25.450Z] Copying: 203/1024 [MB] (203 MBps) [2024-10-28T18:04:26.385Z] Copying: 406/1024 [MB] (202 MBps) [2024-10-28T18:04:27.322Z] Copying: 608/1024 [MB] (202 MBps) [2024-10-28T18:04:28.257Z] Copying: 810/1024 [MB] (201 MBps) [2024-10-28T18:04:28.257Z] Copying: 1013/1024 [MB] (202 MBps) [2024-10-28T18:04:31.545Z] Copying: 1024/1024 [MB] (average 202 MBps) 00:14:15.067 00:14:15.067 18:04:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- xnvme/xnvme.sh@52 -- # remove_null_blk 00:14:15.067 18:04:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:15.067 00:14:15.067 real 0m39.018s 00:14:15.067 user 0m33.904s 00:14:15.067 sys 0m4.576s 00:14:15.067 18:04:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:15.067 18:04:31 nvme_xnvme.xnvme_to_malloc_dd_copy -- common/autotest_common.sh@10 -- # set +x 00:14:15.067 ************************************ 00:14:15.067 END TEST xnvme_to_malloc_dd_copy 00:14:15.067 ************************************ 00:14:15.067 18:04:31 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:15.067 18:04:31 nvme_xnvme -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:14:15.067 18:04:31 nvme_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:15.067 18:04:31 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:15.067 ************************************ 00:14:15.067 START TEST xnvme_bdevperf 00:14:15.067 ************************************ 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1127 -- # xnvme_bdevperf 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@57 -- # init_null_blk gb=1 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # [[ -e /sys/module/null_blk ]] 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@186 -- # modprobe null_blk gb=1 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@187 -- # return 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # xnvme_io=() 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@59 -- # local xnvme0=null0 xnvme0_dev xnvme_io 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@60 -- # local io 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@62 -- # xnvme_io+=(libaio) 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@63 -- # xnvme_io+=(io_uring) 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@65 -- # xnvme0_dev=/dev/nullb0 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # method_bdev_xnvme_create_0=() 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@67 -- # local -A method_bdev_xnvme_create_0 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@68 -- # method_bdev_xnvme_create_0["name"]=null0 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@69 -- # 
method_bdev_xnvme_create_0["filename"]=/dev/nullb0 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:15.067 18:04:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:15.067 { 00:14:15.067 "subsystems": [ 00:14:15.067 { 00:14:15.067 "subsystem": "bdev", 00:14:15.067 "config": [ 00:14:15.067 { 00:14:15.067 "params": { 00:14:15.067 "io_mechanism": "libaio", 00:14:15.067 "filename": "/dev/nullb0", 00:14:15.067 "name": "null0" 00:14:15.067 }, 00:14:15.067 "method": "bdev_xnvme_create" 00:14:15.067 }, 00:14:15.067 { 00:14:15.067 "method": "bdev_wait_for_examine" 00:14:15.067 } 00:14:15.067 ] 00:14:15.067 } 00:14:15.067 ] 00:14:15.067 } 00:14:15.067 [2024-10-28 18:04:31.340263] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:14:15.067 [2024-10-28 18:04:31.340455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70479 ] 00:14:15.067 [2024-10-28 18:04:31.520646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:15.326 [2024-10-28 18:04:31.609265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:15.585 Running I/O for 5 seconds... 
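As traced at xnvme.sh@74 above, the benchmark hands bdevperf the same null0 xnvme bdev the copy tests used. Reading the flags off the trace: -q 64 is the queue depth, -w randread the workload, -t 5 the run time in seconds, -o 4096 the I/O size in bytes, and -T null0 restricts the job to that one bdev. An equivalent standalone invocation, assuming the JSON printed above has been saved to a file:

# bdev_null0.json is a hypothetical file standing in for the traced /dev/fd/62
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json bdev_null0.json \
    -q 64 -w randread -t 5 -T null0 -o 4096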
00:14:17.456 127232.00 IOPS, 497.00 MiB/s [2024-10-28T18:04:35.309Z] 127776.00 IOPS, 499.12 MiB/s [2024-10-28T18:04:36.244Z] 127701.33 IOPS, 498.83 MiB/s [2024-10-28T18:04:37.180Z] 127984.00 IOPS, 499.94 MiB/s [2024-10-28T18:04:37.180Z] 128358.40 IOPS, 501.40 MiB/s 00:14:20.702 Latency(us) 00:14:20.702 [2024-10-28T18:04:37.180Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.702 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:20.702 null0 : 5.00 128301.12 501.18 0.00 0.00 495.82 426.36 2412.92 00:14:20.702 [2024-10-28T18:04:37.180Z] =================================================================================================================== 00:14:20.702 [2024-10-28T18:04:37.180Z] Total : 128301.12 501.18 0.00 0.00 495.82 426.36 2412.92 00:14:21.638 18:04:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@71 -- # for io in "${xnvme_io[@]}" 00:14:21.638 18:04:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@72 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:14:21.638 18:04:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T null0 -o 4096 00:14:21.638 18:04:37 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@74 -- # gen_conf 00:14:21.638 18:04:37 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:21.638 18:04:37 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:21.638 { 00:14:21.638 "subsystems": [ 00:14:21.638 { 00:14:21.638 "subsystem": "bdev", 00:14:21.638 "config": [ 00:14:21.638 { 00:14:21.638 "params": { 00:14:21.638 "io_mechanism": "io_uring", 00:14:21.638 "filename": "/dev/nullb0", 00:14:21.638 "name": "null0" 00:14:21.638 }, 00:14:21.638 "method": "bdev_xnvme_create" 00:14:21.638 }, 00:14:21.638 { 00:14:21.638 "method": "bdev_wait_for_examine" 00:14:21.638 } 00:14:21.638 ] 00:14:21.638 } 00:14:21.638 ] 00:14:21.638 } 00:14:21.638 [2024-10-28 18:04:37.891570] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:14:21.638 [2024-10-28 18:04:37.891741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70559 ] 00:14:21.638 [2024-10-28 18:04:38.069448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.897 [2024-10-28 18:04:38.172211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.155 Running I/O for 5 seconds... 
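The libaio summary just printed is internally consistent: 128301.12 IOPS of 4096-byte reads is 128301.12 * 4096 / 2^20 = 501.18 MiB/s, exactly the MiB/s column, and by Little's law the average number of in-flight I/Os is throughput times mean latency. A quick check of the latter, using only figures from the table above:

# Little's law: concurrency = IOPS x mean latency (latency column is microseconds)
awk 'BEGIN { printf "%.1f\n", 128301.12 * 495.82 / 1e6 }'   # prints 63.6

About 63.6 outstanding I/Os against the requested queue depth of 64, so the queue stayed essentially full for the whole 5-second run.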
00:14:24.036 170496.00 IOPS, 666.00 MiB/s [2024-10-28T18:04:41.887Z] 170048.00 IOPS, 664.25 MiB/s [2024-10-28T18:04:42.455Z] 170965.33 IOPS, 667.83 MiB/s [2024-10-28T18:04:43.830Z] 170720.00 IOPS, 666.88 MiB/s 00:14:27.352 Latency(us) 00:14:27.352 [2024-10-28T18:04:43.830Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.352 Job: null0 (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:27.352 null0 : 5.00 170474.01 665.91 0.00 0.00 372.52 195.49 2398.02 00:14:27.352 [2024-10-28T18:04:43.830Z] =================================================================================================================== 00:14:27.352 [2024-10-28T18:04:43.830Z] Total : 170474.01 665.91 0.00 0.00 372.52 195.49 2398.02 00:14:27.919 18:04:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@82 -- # remove_null_blk 00:14:27.919 18:04:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@191 -- # modprobe -r null_blk 00:14:27.919 ************************************ 00:14:27.919 END TEST xnvme_bdevperf 00:14:27.919 ************************************ 00:14:27.919 00:14:27.919 real 0m13.160s 00:14:27.919 user 0m10.216s 00:14:27.919 sys 0m2.710s 00:14:27.919 18:04:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:27.919 18:04:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:28.177 ************************************ 00:14:28.177 END TEST nvme_xnvme 00:14:28.177 ************************************ 00:14:28.177 00:14:28.177 real 0m52.463s 00:14:28.177 user 0m44.256s 00:14:28.177 sys 0m7.429s 00:14:28.177 18:04:44 nvme_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:28.177 18:04:44 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.177 18:04:44 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:28.177 18:04:44 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:28.177 18:04:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:28.177 18:04:44 -- common/autotest_common.sh@10 -- # set +x 00:14:28.177 ************************************ 00:14:28.177 START TEST blockdev_xnvme 00:14:28.177 ************************************ 00:14:28.177 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:14:28.177 * Looking for test storage... 
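Both bdevperf passes are driven by a single loop in xnvme.sh that only swaps the io_mechanism key of the bdev_xnvme_create parameters, as the xnvme.sh@71-72 trace lines show. A condensed sketch of that loop, assuming the gen_conf helper from dd/common.sh that serializes method_bdev_xnvme_create_0 into the JSON seen above:

# one bdevperf run per I/O backend; everything else in the config is identical
xnvme_io=(libaio io_uring)
for io in "${xnvme_io[@]}"; do
    method_bdev_xnvme_create_0["io_mechanism"]=$io
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(gen_conf) -q 64 -w randread -t 5 -T null0 -o 4096
done

On this null_blk target io_uring comes out roughly a third faster than libaio (170474.01 vs 128301.12 IOPS) at about a quarter lower mean latency (372.52 us vs 495.82 us).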
00:14:28.177 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:14:28.177 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:28.177 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lcov --version 00:14:28.177 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:28.177 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:28.177 18:04:44 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:28.177 18:04:44 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:28.177 18:04:44 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:28.177 18:04:44 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:28.178 18:04:44 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:14:28.178 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:28.178 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:28.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.178 --rc genhtml_branch_coverage=1 00:14:28.178 --rc genhtml_function_coverage=1 00:14:28.178 --rc genhtml_legend=1 00:14:28.178 --rc geninfo_all_blocks=1 00:14:28.178 --rc geninfo_unexecuted_blocks=1 00:14:28.178 00:14:28.178 ' 00:14:28.178 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:28.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.178 --rc genhtml_branch_coverage=1 00:14:28.178 --rc genhtml_function_coverage=1 00:14:28.178 --rc genhtml_legend=1 
00:14:28.178 --rc geninfo_all_blocks=1 00:14:28.178 --rc geninfo_unexecuted_blocks=1 00:14:28.178 00:14:28.178 ' 00:14:28.178 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:28.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.178 --rc genhtml_branch_coverage=1 00:14:28.178 --rc genhtml_function_coverage=1 00:14:28.178 --rc genhtml_legend=1 00:14:28.178 --rc geninfo_all_blocks=1 00:14:28.178 --rc geninfo_unexecuted_blocks=1 00:14:28.178 00:14:28.178 ' 00:14:28.178 18:04:44 blockdev_xnvme -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:28.178 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:28.178 --rc genhtml_branch_coverage=1 00:14:28.178 --rc genhtml_function_coverage=1 00:14:28.178 --rc genhtml_legend=1 00:14:28.178 --rc geninfo_all_blocks=1 00:14:28.178 --rc geninfo_unexecuted_blocks=1 00:14:28.178 00:14:28.178 ' 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # uname -s 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@681 -- # test_type=xnvme 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@682 -- # crypto_device= 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@683 -- # dek= 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@684 -- # env_ctx= 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == bdev ]] 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@689 -- # [[ xnvme == crypto_* ]] 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=70701 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:14:28.178 18:04:44 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 70701 00:14:28.178 18:04:44 blockdev_xnvme -- common/autotest_common.sh@833 -- # 
'[' -z 70701 ']' 00:14:28.178 18:04:44 blockdev_xnvme -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.437 18:04:44 blockdev_xnvme -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:28.437 18:04:44 blockdev_xnvme -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.437 18:04:44 blockdev_xnvme -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:28.437 18:04:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:28.437 [2024-10-28 18:04:44.772658] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:14:28.437 [2024-10-28 18:04:44.772825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70701 ] 00:14:28.696 [2024-10-28 18:04:44.953648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:28.696 [2024-10-28 18:04:45.056723] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:29.631 18:04:45 blockdev_xnvme -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:29.631 18:04:45 blockdev_xnvme -- common/autotest_common.sh@866 -- # return 0 00:14:29.631 18:04:45 blockdev_xnvme -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:14:29.631 18:04:45 blockdev_xnvme -- bdev/blockdev.sh@728 -- # setup_xnvme_conf 00:14:29.631 18:04:45 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:14:29.631 18:04:45 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:14:29.632 18:04:45 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:14:29.632 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:29.890 Waiting for block devices as requested 00:14:29.890 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:14:30.149 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:14:30.149 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:14:30.149 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:14:35.418 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1656 -- # local nvme bdf 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:14:35.418 
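The get_zoned_devs scan traced here (and continuing below for the remaining namespaces) walks /sys/block/nvme* and records any device whose queue/zoned attribute reads something other than "none". A condensed sketch of the check, assuming the same sysfs layout the trace shows:

# collect zoned namespaces; xnvme bdevs are only created for non-zoned devices
declare -A zoned_devs=()
for nvme in /sys/block/nvme*; do
    if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
        zoned_devs[${nvme##*/}]=1
    fi
done
# in this run every device reports "none", i.e. '[[ none != none ]]' is always
# false, so the map stays empty and all six namespaces are handed to xnvme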
18:04:51 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n2 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n2 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n3 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme2n3 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3c3n1 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3c3n1 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1659 -- # is_block_zoned nvme3n1 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1648 -- # local device=nvme3n1 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:14:35.418 18:04:51 blockdev_xnvme -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@96 
-- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n2 ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n3 ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:35.418 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism") 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring' 'bdev_xnvme_create /dev/nvme2n2 nvme2n2 io_uring' 'bdev_xnvme_create /dev/nvme2n3 nvme2n3 io_uring' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring' 00:14:35.419 nvme0n1 00:14:35.419 nvme1n1 00:14:35.419 nvme2n1 00:14:35.419 nvme2n2 00:14:35.419 nvme2n3 00:14:35.419 nvme3n1 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # cat 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@10 
-- # set +x 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:35.419 18:04:51 blockdev_xnvme -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # jq -r .name 00:14:35.419 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d4a04840-0bb9-4a54-9442-f20cd87ff3cf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d4a04840-0bb9-4a54-9442-f20cd87ff3cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2111ba86-b52d-4eb6-8ac5-8816a933774c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2111ba86-b52d-4eb6-8ac5-8816a933774c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d8577ba6-d76e-46bb-9db7-03061c1154f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d8577ba6-d76e-46bb-9db7-03061c1154f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": 
false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "baf09db6-7087-4a86-98ed-849324441793"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "baf09db6-7087-4a86-98ed-849324441793",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "4a20c13c-1869-4a7d-ae66-e42413f70789"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4a20c13c-1869-4a7d-ae66-e42413f70789",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "dc9c77d3-5d11-4006-87d3-eba7f353ce43"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dc9c77d3-5d11-4006-87d3-eba7f353ce43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:14:35.678 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:14:35.678 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@751 -- # hello_world_bdev=nvme0n1 00:14:35.679 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:14:35.679 18:04:51 blockdev_xnvme -- bdev/blockdev.sh@753 -- # killprocess 70701 00:14:35.679 18:04:51 blockdev_xnvme -- 
common/autotest_common.sh@952 -- # '[' -z 70701 ']' 00:14:35.679 18:04:51 blockdev_xnvme -- common/autotest_common.sh@956 -- # kill -0 70701 00:14:35.679 18:04:51 blockdev_xnvme -- common/autotest_common.sh@957 -- # uname 00:14:35.679 18:04:51 blockdev_xnvme -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:35.679 18:04:51 blockdev_xnvme -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 70701 00:14:35.679 killing process with pid 70701 00:14:35.679 18:04:51 blockdev_xnvme -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:35.679 18:04:51 blockdev_xnvme -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:35.679 18:04:51 blockdev_xnvme -- common/autotest_common.sh@970 -- # echo 'killing process with pid 70701' 00:14:35.679 18:04:51 blockdev_xnvme -- common/autotest_common.sh@971 -- # kill 70701 00:14:35.679 18:04:51 blockdev_xnvme -- common/autotest_common.sh@976 -- # wait 70701 00:14:37.582 18:04:53 blockdev_xnvme -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:14:37.582 18:04:53 blockdev_xnvme -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:37.582 18:04:53 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 7 -le 1 ']' 00:14:37.582 18:04:53 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:37.582 18:04:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:37.582 ************************************ 00:14:37.582 START TEST bdev_hello_world 00:14:37.582 ************************************ 00:14:37.582 18:04:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:14:37.582 [2024-10-28 18:04:53.838524] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:14:37.582 [2024-10-28 18:04:53.838709] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71069 ] 00:14:37.582 [2024-10-28 18:04:54.007875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:37.844 [2024-10-28 18:04:54.096202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.102 [2024-10-28 18:04:54.468969] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:14:38.102 [2024-10-28 18:04:54.469034] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:14:38.102 [2024-10-28 18:04:54.469071] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:14:38.102 [2024-10-28 18:04:54.471065] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:14:38.103 [2024-10-28 18:04:54.471387] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:14:38.103 [2024-10-28 18:04:54.471417] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:14:38.103 [2024-10-28 18:04:54.471745] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
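The NOTICE lines above are the complete hello_bdev round trip: start the app, open bdev nvme0n1, acquire an I/O channel, write "Hello World!" to the bdev, then read it back (the app shuts down right after, below). The example can be rerun by hand with the invocation the harness traced, assuming the bdev.json generated earlier in this job is still in place and hugepages are set up as they are here:

/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1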
00:14:38.103 00:14:38.103 [2024-10-28 18:04:54.471781] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:14:39.039 ************************************ 00:14:39.039 END TEST bdev_hello_world 00:14:39.039 ************************************ 00:14:39.039 00:14:39.039 real 0m1.565s 00:14:39.039 user 0m1.260s 00:14:39.039 sys 0m0.191s 00:14:39.039 18:04:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:39.039 18:04:55 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:14:39.039 18:04:55 blockdev_xnvme -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:14:39.039 18:04:55 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:39.039 18:04:55 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:39.039 18:04:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:39.039 ************************************ 00:14:39.039 START TEST bdev_bounds 00:14:39.039 ************************************ 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1127 -- # bdev_bounds '' 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=71110 00:14:39.039 Process bdevio pid: 71110 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 71110' 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 71110 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@833 -- # '[' -z 71110 ']' 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:39.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:39.039 18:04:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:39.039 [2024-10-28 18:04:55.460918] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
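The bdevio process starting up here is only half of the bounds test: it comes up with -w (wait for the start RPC) and is then driven by tests.py perform_tests over the RPC socket, as the blockdev.sh@293 trace below shows. A rough sketch of that pairing:

# start the bdevio server (the EAL line below shows it takes three reactors, -c 0x7),
# wait for its RPC socket, then fire the whole CUnit suite at it
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' &
# (the harness uses waitforlisten here rather than a raw sleep)
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests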
00:14:39.039 [2024-10-28 18:04:55.461088] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71110 ] 00:14:39.298 [2024-10-28 18:04:55.639046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:39.298 [2024-10-28 18:04:55.730757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:39.298 [2024-10-28 18:04:55.730827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:39.298 [2024-10-28 18:04:55.730823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.236 18:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:40.236 18:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@866 -- # return 0 00:14:40.236 18:04:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:14:40.236 I/O targets: 00:14:40.236 nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:14:40.236 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:14:40.236 nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:40.236 nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:40.236 nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:14:40.236 nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:14:40.236 00:14:40.236 00:14:40.236 CUnit - A unit testing framework for C - Version 2.1-3 00:14:40.236 http://cunit.sourceforge.net/ 00:14:40.236 00:14:40.236 00:14:40.236 Suite: bdevio tests on: nvme3n1 00:14:40.236 Test: blockdev write read block ...passed 00:14:40.236 Test: blockdev write zeroes read block ...passed 00:14:40.236 Test: blockdev write zeroes read no split ...passed 00:14:40.236 Test: blockdev write zeroes read split ...passed 00:14:40.236 Test: blockdev write zeroes read split partial ...passed 00:14:40.236 Test: blockdev reset ...passed 00:14:40.236 Test: blockdev write read 8 blocks ...passed 00:14:40.236 Test: blockdev write read size > 128k ...passed 00:14:40.236 Test: blockdev write read invalid size ...passed 00:14:40.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:40.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:40.236 Test: blockdev write read max offset ...passed 00:14:40.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:40.236 Test: blockdev writev readv 8 blocks ...passed 00:14:40.236 Test: blockdev writev readv 30 x 1block ...passed 00:14:40.236 Test: blockdev writev readv block ...passed 00:14:40.236 Test: blockdev writev readv size > 128k ...passed 00:14:40.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:40.236 Test: blockdev comparev and writev ...passed 00:14:40.236 Test: blockdev nvme passthru rw ...passed 00:14:40.236 Test: blockdev nvme passthru vendor specific ...passed 00:14:40.236 Test: blockdev nvme admin passthru ...passed 00:14:40.236 Test: blockdev copy ...passed 00:14:40.236 Suite: bdevio tests on: nvme2n3 00:14:40.236 Test: blockdev write read block ...passed 00:14:40.236 Test: blockdev write zeroes read block ...passed 00:14:40.236 Test: blockdev write zeroes read no split ...passed 00:14:40.236 Test: blockdev write zeroes read split ...passed 00:14:40.236 Test: blockdev write zeroes read split partial ...passed 00:14:40.236 Test: blockdev reset ...passed 
00:14:40.236 Test: blockdev write read 8 blocks ...passed 00:14:40.236 Test: blockdev write read size > 128k ...passed 00:14:40.236 Test: blockdev write read invalid size ...passed 00:14:40.236 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:40.236 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:40.236 Test: blockdev write read max offset ...passed 00:14:40.236 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:40.236 Test: blockdev writev readv 8 blocks ...passed 00:14:40.236 Test: blockdev writev readv 30 x 1block ...passed 00:14:40.236 Test: blockdev writev readv block ...passed 00:14:40.236 Test: blockdev writev readv size > 128k ...passed 00:14:40.236 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:40.236 Test: blockdev comparev and writev ...passed 00:14:40.236 Test: blockdev nvme passthru rw ...passed 00:14:40.236 Test: blockdev nvme passthru vendor specific ...passed 00:14:40.236 Test: blockdev nvme admin passthru ...passed 00:14:40.236 Test: blockdev copy ...passed 00:14:40.236 Suite: bdevio tests on: nvme2n2 00:14:40.236 Test: blockdev write read block ...passed 00:14:40.236 Test: blockdev write zeroes read block ...passed 00:14:40.236 Test: blockdev write zeroes read no split ...passed 00:14:40.236 Test: blockdev write zeroes read split ...passed 00:14:40.495 Test: blockdev write zeroes read split partial ...passed 00:14:40.495 Test: blockdev reset ...passed 00:14:40.495 Test: blockdev write read 8 blocks ...passed 00:14:40.495 Test: blockdev write read size > 128k ...passed 00:14:40.495 Test: blockdev write read invalid size ...passed 00:14:40.495 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:40.495 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:40.495 Test: blockdev write read max offset ...passed 00:14:40.495 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:40.495 Test: blockdev writev readv 8 blocks ...passed 00:14:40.495 Test: blockdev writev readv 30 x 1block ...passed 00:14:40.495 Test: blockdev writev readv block ...passed 00:14:40.495 Test: blockdev writev readv size > 128k ...passed 00:14:40.495 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:40.495 Test: blockdev comparev and writev ...passed 00:14:40.495 Test: blockdev nvme passthru rw ...passed 00:14:40.495 Test: blockdev nvme passthru vendor specific ...passed 00:14:40.495 Test: blockdev nvme admin passthru ...passed 00:14:40.495 Test: blockdev copy ...passed 00:14:40.495 Suite: bdevio tests on: nvme2n1 00:14:40.495 Test: blockdev write read block ...passed 00:14:40.495 Test: blockdev write zeroes read block ...passed 00:14:40.495 Test: blockdev write zeroes read no split ...passed 00:14:40.495 Test: blockdev write zeroes read split ...passed 00:14:40.495 Test: blockdev write zeroes read split partial ...passed 00:14:40.495 Test: blockdev reset ...passed 00:14:40.495 Test: blockdev write read 8 blocks ...passed 00:14:40.495 Test: blockdev write read size > 128k ...passed 00:14:40.495 Test: blockdev write read invalid size ...passed 00:14:40.495 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:40.495 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:40.495 Test: blockdev write read max offset ...passed 00:14:40.495 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:40.495 Test: blockdev writev readv 8 blocks 
...passed 00:14:40.495 Test: blockdev writev readv 30 x 1block ...passed 00:14:40.495 Test: blockdev writev readv block ...passed 00:14:40.495 Test: blockdev writev readv size > 128k ...passed 00:14:40.495 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:40.495 Test: blockdev comparev and writev ...passed 00:14:40.495 Test: blockdev nvme passthru rw ...passed 00:14:40.495 Test: blockdev nvme passthru vendor specific ...passed 00:14:40.495 Test: blockdev nvme admin passthru ...passed 00:14:40.495 Test: blockdev copy ...passed 00:14:40.495 Suite: bdevio tests on: nvme1n1 00:14:40.495 Test: blockdev write read block ...passed 00:14:40.496 Test: blockdev write zeroes read block ...passed 00:14:40.496 Test: blockdev write zeroes read no split ...passed 00:14:40.496 Test: blockdev write zeroes read split ...passed 00:14:40.496 Test: blockdev write zeroes read split partial ...passed 00:14:40.496 Test: blockdev reset ...passed 00:14:40.496 Test: blockdev write read 8 blocks ...passed 00:14:40.496 Test: blockdev write read size > 128k ...passed 00:14:40.496 Test: blockdev write read invalid size ...passed 00:14:40.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:40.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:40.496 Test: blockdev write read max offset ...passed 00:14:40.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:40.496 Test: blockdev writev readv 8 blocks ...passed 00:14:40.496 Test: blockdev writev readv 30 x 1block ...passed 00:14:40.496 Test: blockdev writev readv block ...passed 00:14:40.496 Test: blockdev writev readv size > 128k ...passed 00:14:40.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:40.496 Test: blockdev comparev and writev ...passed 00:14:40.496 Test: blockdev nvme passthru rw ...passed 00:14:40.496 Test: blockdev nvme passthru vendor specific ...passed 00:14:40.496 Test: blockdev nvme admin passthru ...passed 00:14:40.496 Test: blockdev copy ...passed 00:14:40.496 Suite: bdevio tests on: nvme0n1 00:14:40.496 Test: blockdev write read block ...passed 00:14:40.496 Test: blockdev write zeroes read block ...passed 00:14:40.496 Test: blockdev write zeroes read no split ...passed 00:14:40.496 Test: blockdev write zeroes read split ...passed 00:14:40.496 Test: blockdev write zeroes read split partial ...passed 00:14:40.496 Test: blockdev reset ...passed 00:14:40.496 Test: blockdev write read 8 blocks ...passed 00:14:40.496 Test: blockdev write read size > 128k ...passed 00:14:40.496 Test: blockdev write read invalid size ...passed 00:14:40.496 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:14:40.496 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:14:40.496 Test: blockdev write read max offset ...passed 00:14:40.496 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:14:40.496 Test: blockdev writev readv 8 blocks ...passed 00:14:40.496 Test: blockdev writev readv 30 x 1block ...passed 00:14:40.496 Test: blockdev writev readv block ...passed 00:14:40.496 Test: blockdev writev readv size > 128k ...passed 00:14:40.496 Test: blockdev writev readv size > 128k in two iovs ...passed 00:14:40.496 Test: blockdev comparev and writev ...passed 00:14:40.496 Test: blockdev nvme passthru rw ...passed 00:14:40.496 Test: blockdev nvme passthru vendor specific ...passed 00:14:40.496 Test: blockdev nvme admin passthru ...passed 00:14:40.496 Test: blockdev copy ...passed 
00:14:40.496 00:14:40.496 Run Summary: Type Total Ran Passed Failed Inactive 00:14:40.496 suites 6 6 n/a 0 0 00:14:40.496 tests 138 138 138 0 0 00:14:40.496 asserts 780 780 780 0 n/a 00:14:40.496 00:14:40.496 Elapsed time = 1.210 seconds 00:14:40.496 0 00:14:40.755 18:04:56 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 71110 00:14:40.755 18:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@952 -- # '[' -z 71110 ']' 00:14:40.755 18:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@956 -- # kill -0 71110 00:14:40.755 18:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # uname 00:14:40.755 18:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:40.755 18:04:56 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71110 00:14:40.755 killing process with pid 71110 00:14:40.755 18:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:40.755 18:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:40.755 18:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71110' 00:14:40.755 18:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@971 -- # kill 71110 00:14:40.755 18:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@976 -- # wait 71110 00:14:41.689 18:04:57 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:14:41.689 00:14:41.689 real 0m2.567s 00:14:41.689 user 0m6.543s 00:14:41.689 sys 0m0.338s 00:14:41.689 ************************************ 00:14:41.689 END TEST bdev_bounds 00:14:41.689 ************************************ 00:14:41.689 18:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:41.689 18:04:57 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:14:41.689 18:04:57 blockdev_xnvme -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:41.689 18:04:57 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:14:41.690 18:04:57 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:41.690 18:04:57 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:41.690 ************************************ 00:14:41.690 START TEST bdev_nbd 00:14:41.690 ************************************ 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1127 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '' 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
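bdev_nbd exports each of the six xnvme bdevs as a kernel block device over NBD: for every bdev the harness calls the nbd_start_disk RPC on /var/tmp/spdk-nbd.sock, captures the /dev/nbdN node the target returns, waits for it to appear in /proc/partitions, and smoke-tests one 4 KiB direct read, as the nbd_common.sh and waitfornbd traces below show. A condensed sketch of that per-device handshake, using only calls visible in the trace:

# map each bdev to an nbd node and verify it answers a direct read
for bdev in nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1; do
    nbd=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
              -s /var/tmp/spdk-nbd.sock nbd_start_disk "$bdev")
    grep -q -w "${nbd##*/}" /proc/partitions    # waitfornbd retries this up to 20x
    dd if="$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
done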
00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=71167 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 71167 /var/tmp/spdk-nbd.sock 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@833 -- # '[' -z 71167 ']' 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:41.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:41.690 18:04:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:41.690 [2024-10-28 18:04:58.094118] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:14:41.690 [2024-10-28 18:04:58.094627] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.948 [2024-10-28 18:04:58.277785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:41.948 [2024-10-28 18:04:58.377397] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@866 -- # return 0 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:42.882 
1+0 records in 00:14:42.882 1+0 records out 00:14:42.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448374 s, 9.1 MB/s 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:42.882 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:43.140 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:43.141 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.141 1+0 records in 00:14:43.141 1+0 records out 00:14:43.141 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000760654 s, 5.4 MB/s 00:14:43.141 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.141 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:43.141 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.141 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:43.141 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:43.141 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:43.141 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:43.141 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:14:43.707 18:04:59 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd2 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd2 /proc/partitions 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.707 1+0 records in 00:14:43.707 1+0 records out 00:14:43.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00072746 s, 5.6 MB/s 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:43.707 18:04:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd3 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd3 /proc/partitions 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:43.968 1+0 records in 00:14:43.968 1+0 records out 00:14:43.968 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000849034 s, 4.8 MB/s 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:43.968 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd4 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd4 /proc/partitions 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.262 1+0 records in 00:14:44.262 1+0 records out 00:14:44.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000625034 s, 6.6 MB/s 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:44.262 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:14:44.518 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:14:44.518 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:14:44.518 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:14:44.518 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd5 00:14:44.518 18:05:00 
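Each attach in this first pass uses the two-argument form of the RPC: only the bdev name is supplied, SPDK picks the next free node, and the call prints the /dev/nbdX path, which the script captures as nbd_device. A sketch of a single attach against the socket used throughout this run:

    # Let SPDK auto-assign the NBD node; the RPC prints the chosen path.
    nbd_device=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1)
    echo "attached as $nbd_device"   # /dev/nbd5 in the trace above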
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:44.518 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:44.518 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:44.518 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd5 /proc/partitions 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:44.519 1+0 records in 00:14:44.519 1+0 records out 00:14:44.519 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000949788 s, 4.3 MB/s 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:14:44.519 18:05:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:44.776 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:14:44.776 { 00:14:44.776 "nbd_device": "/dev/nbd0", 00:14:44.776 "bdev_name": "nvme0n1" 00:14:44.776 }, 00:14:44.776 { 00:14:44.777 "nbd_device": "/dev/nbd1", 00:14:44.777 "bdev_name": "nvme1n1" 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "nbd_device": "/dev/nbd2", 00:14:44.777 "bdev_name": "nvme2n1" 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "nbd_device": "/dev/nbd3", 00:14:44.777 "bdev_name": "nvme2n2" 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "nbd_device": "/dev/nbd4", 00:14:44.777 "bdev_name": "nvme2n3" 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "nbd_device": "/dev/nbd5", 00:14:44.777 "bdev_name": "nvme3n1" 00:14:44.777 } 00:14:44.777 ]' 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:14:44.777 { 00:14:44.777 "nbd_device": "/dev/nbd0", 00:14:44.777 "bdev_name": "nvme0n1" 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "nbd_device": "/dev/nbd1", 00:14:44.777 "bdev_name": "nvme1n1" 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "nbd_device": "/dev/nbd2", 00:14:44.777 "bdev_name": "nvme2n1" 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "nbd_device": "/dev/nbd3", 00:14:44.777 "bdev_name": "nvme2n2" 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "nbd_device": "/dev/nbd4", 00:14:44.777 "bdev_name": "nvme2n3" 00:14:44.777 }, 00:14:44.777 { 00:14:44.777 "nbd_device": 
"/dev/nbd5", 00:14:44.777 "bdev_name": "nvme3n1" 00:14:44.777 } 00:14:44.777 ]' 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.777 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.035 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.292 18:05:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.549 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:45.808 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.066 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:46.632 18:05:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme1n1 nvme2n1 nvme2n2 nvme2n3 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme1n1' 'nvme2n1' 'nvme2n2' 'nvme2n3' 'nvme3n1') 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:46.890 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:14:47.147 /dev/nbd0 00:14:47.147 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.148 1+0 records in 00:14:47.148 1+0 records out 00:14:47.148 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00162056 s, 2.5 MB/s 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:47.148 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd1 00:14:47.405 /dev/nbd1 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.405 1+0 records in 00:14:47.405 1+0 records out 00:14:47.405 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000735434 s, 5.6 MB/s 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:47.405 18:05:03 
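For the data-verify stage the helper switches to the three-argument form, pinning each bdev to a caller-chosen node (/dev/nbd0, /dev/nbd1, then /dev/nbd10 through /dev/nbd13 per the nbd_list declared above) so the later write/compare loop can walk a fixed device list. That explicit mapping, sketched:

    # Pin bdevs to explicit NBD nodes instead of auto-assigning.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc nbd_start_disk nvme0n1 /dev/nbd0
    $rpc nbd_start_disk nvme1n1 /dev/nbd1
    $rpc nbd_start_disk nvme2n1 /dev/nbd10   # and so on through /dev/nbd13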
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:47.405 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd10 00:14:47.663 /dev/nbd10 00:14:47.663 18:05:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd10 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd10 /proc/partitions 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.663 1+0 records in 00:14:47.663 1+0 records out 00:14:47.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000643468 s, 6.4 MB/s 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:47.663 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n2 /dev/nbd11 00:14:47.922 /dev/nbd11 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd11 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:47.922 18:05:04 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd11 /proc/partitions 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:47.922 1+0 records in 00:14:47.922 1+0 records out 00:14:47.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000511325 s, 8.0 MB/s 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:47.922 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n3 /dev/nbd12 00:14:48.181 /dev/nbd12 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd12 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd12 /proc/partitions 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.181 1+0 records in 00:14:48.181 1+0 records out 00:14:48.181 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000829874 s, 4.9 MB/s 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:48.181 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:14:48.440 /dev/nbd13 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@870 -- # local nbd_name=nbd13 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@871 -- # local i 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@874 -- # grep -q -w nbd13 /proc/partitions 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # break 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@887 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:48.699 1+0 records in 00:14:48.699 1+0 records out 00:14:48.699 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000620454 s, 6.6 MB/s 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # size=4096 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # return 0 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:48.699 18:05:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd0", 00:14:48.957 "bdev_name": "nvme0n1" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd1", 00:14:48.957 "bdev_name": "nvme1n1" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd10", 00:14:48.957 "bdev_name": "nvme2n1" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd11", 00:14:48.957 "bdev_name": "nvme2n2" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd12", 00:14:48.957 "bdev_name": "nvme2n3" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd13", 00:14:48.957 "bdev_name": "nvme3n1" 00:14:48.957 } 00:14:48.957 ]' 00:14:48.957 18:05:05 
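nbd_get_disks returns the whole device-to-bdev map as JSON, and the count assertions in this test are derived from it the same way each time: extract .nbd_device with jq, count matches with grep -c, and compare against the expected number (6 while attached, 0 after teardown; in the empty case grep -c prints 0 but exits non-zero, hence the true fallback visible in the trace). The same check as a sketch:

    # Count attached NBD devices from the RPC's JSON listing.
    json=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-nbd.sock nbd_get_disks)
    count=$(echo "$json" | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    [[ $count -eq 6 ]] || echo "expected 6 NBD devices, found $count"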
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd0", 00:14:48.957 "bdev_name": "nvme0n1" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd1", 00:14:48.957 "bdev_name": "nvme1n1" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd10", 00:14:48.957 "bdev_name": "nvme2n1" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd11", 00:14:48.957 "bdev_name": "nvme2n2" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd12", 00:14:48.957 "bdev_name": "nvme2n3" 00:14:48.957 }, 00:14:48.957 { 00:14:48.957 "nbd_device": "/dev/nbd13", 00:14:48.957 "bdev_name": "nvme3n1" 00:14:48.957 } 00:14:48.957 ]' 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:14:48.957 /dev/nbd1 00:14:48.957 /dev/nbd10 00:14:48.957 /dev/nbd11 00:14:48.957 /dev/nbd12 00:14:48.957 /dev/nbd13' 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:14:48.957 /dev/nbd1 00:14:48.957 /dev/nbd10 00:14:48.957 /dev/nbd11 00:14:48.957 /dev/nbd12 00:14:48.957 /dev/nbd13' 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:14:48.957 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:14:48.958 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:48.958 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:48.958 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:14:48.958 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:48.958 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:14:48.958 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:14:48.958 256+0 records in 00:14:48.958 256+0 records out 00:14:48.958 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00784516 s, 134 MB/s 00:14:48.958 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:48.958 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:14:49.216 256+0 records in 00:14:49.216 256+0 records out 00:14:49.216 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163376 s, 6.4 MB/s 00:14:49.216 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:49.216 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:14:49.216 256+0 records in 00:14:49.216 256+0 records out 00:14:49.216 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.170786 s, 6.1 MB/s 00:14:49.216 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:49.216 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:14:49.474 256+0 records in 00:14:49.474 256+0 records out 00:14:49.474 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163565 s, 6.4 MB/s 00:14:49.474 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:49.474 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:14:49.732 256+0 records in 00:14:49.732 256+0 records out 00:14:49.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.157478 s, 6.7 MB/s 00:14:49.732 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:49.732 18:05:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:14:49.732 256+0 records in 00:14:49.732 256+0 records out 00:14:49.732 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.159591 s, 6.6 MB/s 00:14:49.732 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:14:49.732 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:14:49.991 256+0 records in 00:14:49.991 256+0 records out 00:14:49.991 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.163303 s, 6.4 MB/s 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:49.991 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:50.248 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:50.248 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:50.248 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:50.249 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.249 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.249 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:50.249 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:50.249 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.249 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.249 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.507 18:05:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
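The data-integrity pass just completed is plain dd plus cmp: one 1 MiB random pattern file (256 x 4 KiB) is generated from /dev/urandom, written to every NBD node with oflag=direct, and then compared back byte-for-byte over the first 1 MiB of each device; any mismatch fails the test. The round trip, sketched with the pattern-file path shortened:

    # Write a shared random pattern through each device, then verify it.
    devices="/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13"
    dd if=/dev/urandom of=/tmp/nbdrandtest bs=4096 count=256
    for nbd in $devices; do
        dd if=/tmp/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct   # write pass
    done
    for nbd in $devices; do
        cmp -b -n 1M /tmp/nbdrandtest $nbd                              # verify pass
    done
    rm /tmp/nbdrandtest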
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:50.765 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.331 18:05:07 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.898 18:05:08 
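Detach mirrors attach: nbd_stop_disk is sent over the same socket and waitfornbd_exit then polls until the device name drops out of /proc/partitions (the break in the trace fires once grep -q -w no longer matches). Simplified sketch:

    # Stop one NBD disk and wait for the kernel to deregister it.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13
    for ((i = 1; i <= 20; i++)); do
        grep -q -w nbd13 /proc/partitions || break   # gone => teardown complete
    done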
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:14:51.898 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:14:52.157 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:14:52.415 malloc_lvol_verify 00:14:52.415 18:05:08 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:14:52.672 69d8e5d9-4d65-4caf-9f7b-90ef4b149bef 00:14:52.672 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:14:52.930 0d32231b-959a-4cc8-85e9-1a7a816a6d09 00:14:52.930 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:14:53.188 /dev/nbd0 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 
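The closing nbd_with_lvol_verify step chains four RPCs and ends with a real filesystem format to exercise the whole stack: a malloc bdev, an lvstore on top of it, a small lvol in that store, and an NBD export of lvs/lvol that mkfs.ext4 is run against (its mke2fs output follows below). The sequence as a sketch; the size arguments mirror the trace, and reading them as 16 MiB with 512-byte blocks for the malloc bdev and a 4 MiB lvol is an inference consistent with the 4096 1k-block filesystem mke2fs reports:

    # Malloc bdev -> lvstore -> lvol -> NBD export -> format.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create -b malloc_lvol_verify 16 512   # backing malloc bdev
    $rpc bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
    $rpc bdev_lvol_create lvol 4 -l lvs                    # small volume in lvs
    $rpc nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0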
00:14:53.188 mke2fs 1.47.0 (5-Feb-2023) 00:14:53.188 Discarding device blocks: 0/4096 done 00:14:53.188 Creating filesystem with 4096 1k blocks and 1024 inodes 00:14:53.188 00:14:53.188 Allocating group tables: 0/1 done 00:14:53.188 Writing inode tables: 0/1 done 00:14:53.188 Creating journal (1024 blocks): done 00:14:53.188 Writing superblocks and filesystem accounting information: 0/1 done 00:14:53.188 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:53.188 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 71167 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@952 -- # '[' -z 71167 ']' 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@956 -- # kill -0 71167 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # uname 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 71167 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:53.445 killing process with pid 71167 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@970 -- # echo 'killing process with pid 71167' 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@971 -- # kill 71167 00:14:53.445 18:05:09 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@976 -- # wait 71167 00:14:54.381 18:05:10 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:14:54.381 00:14:54.381 real 0m12.745s 00:14:54.381 user 0m18.261s 00:14:54.381 sys 0m4.108s 00:14:54.381 18:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:54.381 18:05:10 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:14:54.381 ************************************ 
00:14:54.381 END TEST bdev_nbd 00:14:54.381 ************************************ 00:14:54.381 18:05:10 blockdev_xnvme -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:14:54.381 18:05:10 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = nvme ']' 00:14:54.381 18:05:10 blockdev_xnvme -- bdev/blockdev.sh@763 -- # '[' xnvme = gpt ']' 00:14:54.381 18:05:10 blockdev_xnvme -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:14:54.381 18:05:10 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:54.381 18:05:10 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.381 18:05:10 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:54.381 ************************************ 00:14:54.381 START TEST bdev_fio 00:14:54.381 ************************************ 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1127 -- # fio_test_suite '' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:14:54.381 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=verify 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type=AIO 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local fio_dir=/usr/src/fio 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z verify ']' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' verify == verify ']' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1316 -- # cat 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1325 -- # '[' AIO == AIO ']' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # /usr/src/fio/fio --version 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1326 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # 
echo serialize_overlap=1 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n2]' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n2 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n3]' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n3 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1103 -- # '[' 11 -le 1 ']' 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:54.381 18:05:10 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:14:54.640 ************************************ 00:14:54.640 START TEST bdev_fio_rw_verify 00:14:54.640 ************************************ 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1127 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 
--spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local sanitizers 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # shift 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # local asan_lib= 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # grep libasan 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # break 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:54.640 18:05:10 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:14:54.899 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:54.899 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:54.899 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:54.899 job_nvme2n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:54.899 job_nvme2n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:54.899 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:14:54.899 fio-3.35 00:14:54.899 Starting 6 threads 00:15:07.127 00:15:07.127 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=71598: Mon Oct 28 18:05:21 2024 00:15:07.127 read: IOPS=29.3k, BW=115MiB/s (120MB/s)(1146MiB/10001msec) 00:15:07.127 slat (usec): min=2, max=774, avg= 6.83, stdev= 4.01 00:15:07.127 clat (usec): min=79, max=4797, avg=634.27, 
stdev=222.57 00:15:07.127 lat (usec): min=95, max=4804, avg=641.10, stdev=223.29 00:15:07.127 clat percentiles (usec): 00:15:07.127 | 50.000th=[ 660], 99.000th=[ 1123], 99.900th=[ 1663], 99.990th=[ 3621], 00:15:07.127 | 99.999th=[ 4752] 00:15:07.127 write: IOPS=29.7k, BW=116MiB/s (122MB/s)(1161MiB/10001msec); 0 zone resets 00:15:07.127 slat (usec): min=9, max=4401, avg=26.36, stdev=31.55 00:15:07.127 clat (usec): min=100, max=6724, avg=717.23, stdev=232.82 00:15:07.127 lat (usec): min=118, max=6750, avg=743.59, stdev=235.42 00:15:07.127 clat percentiles (usec): 00:15:07.127 | 50.000th=[ 734], 99.000th=[ 1303], 99.900th=[ 1827], 99.990th=[ 4359], 00:15:07.127 | 99.999th=[ 6652] 00:15:07.127 bw ( KiB/s): min=98584, max=143333, per=99.58%, avg=118395.16, stdev=2219.60, samples=114 00:15:07.127 iops : min=24646, max=35833, avg=29598.74, stdev=554.83, samples=114 00:15:07.127 lat (usec) : 100=0.01%, 250=2.63%, 500=20.72%, 750=36.90%, 1000=34.29% 00:15:07.127 lat (msec) : 2=5.39%, 4=0.05%, 10=0.01% 00:15:07.127 cpu : usr=58.94%, sys=27.35%, ctx=8421, majf=0, minf=24970 00:15:07.127 IO depths : 1=11.9%, 2=24.4%, 4=50.6%, 8=13.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:07.127 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.127 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:07.127 issued rwts: total=293290,297275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:07.127 latency : target=0, window=0, percentile=100.00%, depth=8 00:15:07.127 00:15:07.127 Run status group 0 (all jobs): 00:15:07.127 READ: bw=115MiB/s (120MB/s), 115MiB/s-115MiB/s (120MB/s-120MB/s), io=1146MiB (1201MB), run=10001-10001msec 00:15:07.127 WRITE: bw=116MiB/s (122MB/s), 116MiB/s-116MiB/s (122MB/s-122MB/s), io=1161MiB (1218MB), run=10001-10001msec 00:15:07.127 ----------------------------------------------------- 00:15:07.127 Suppressions used: 00:15:07.127 count bytes template 00:15:07.127 6 48 /usr/src/fio/parse.c 00:15:07.127 3772 362112 /usr/src/fio/iolog.c 00:15:07.127 1 8 libtcmalloc_minimal.so 00:15:07.127 1 904 libcrypto.so 00:15:07.127 ----------------------------------------------------- 00:15:07.127 00:15:07.127 00:15:07.127 real 0m12.158s 00:15:07.127 user 0m37.061s 00:15:07.127 sys 0m16.751s 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:15:07.127 ************************************ 00:15:07.127 END TEST bdev_fio_rw_verify 00:15:07.127 ************************************ 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1282 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1283 -- # local workload=trim 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local bdev_type= 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local env_context= 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local 
fio_dir=/usr/src/fio 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1293 -- # '[' -z trim ']' 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1297 -- # '[' -n '' ']' 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1301 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # cat 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1315 -- # '[' trim == verify ']' 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1330 -- # '[' trim == trim ']' 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1331 -- # echo rw=trimwrite 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "d4a04840-0bb9-4a54-9442-f20cd87ff3cf"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "d4a04840-0bb9-4a54-9442-f20cd87ff3cf",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "2111ba86-b52d-4eb6-8ac5-8816a933774c"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "2111ba86-b52d-4eb6-8ac5-8816a933774c",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "d8577ba6-d76e-46bb-9db7-03061c1154f9"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d8577ba6-d76e-46bb-9db7-03061c1154f9",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": 
true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n2",' ' "aliases": [' ' "baf09db6-7087-4a86-98ed-849324441793"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "baf09db6-7087-4a86-98ed-849324441793",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n3",' ' "aliases": [' ' "4a20c13c-1869-4a7d-ae66-e42413f70789"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "4a20c13c-1869-4a7d-ae66-e42413f70789",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "dc9c77d3-5d11-4006-87d3-eba7f353ce43"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "dc9c77d3-5d11-4006-87d3-eba7f353ce43",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:07.127 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:15:07.128 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:15:07.128 /home/vagrant/spdk_repo/spdk 00:15:07.128 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:15:07.128 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:15:07.128 18:05:23 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:15:07.128 00:15:07.128 real 0m12.346s 00:15:07.128 user 0m37.175s 00:15:07.128 sys 0m16.824s 00:15:07.128 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:07.128 ************************************ 00:15:07.128 END TEST bdev_fio 00:15:07.128 ************************************ 00:15:07.128 18:05:23 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:15:07.128 18:05:23 blockdev_xnvme -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:07.128 18:05:23 blockdev_xnvme -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:07.128 18:05:23 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:15:07.128 18:05:23 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:07.128 18:05:23 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:07.128 ************************************ 00:15:07.128 START TEST bdev_verify 00:15:07.128 ************************************ 00:15:07.128 18:05:23 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:15:07.128 [2024-10-28 18:05:23.278598] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:15:07.128 [2024-10-28 18:05:23.278790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71767 ] 00:15:07.128 [2024-10-28 18:05:23.465458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:07.128 [2024-10-28 18:05:23.593418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:07.128 [2024-10-28 18:05:23.593424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:07.695 Running I/O for 5 seconds... 
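For reference, the verify pass below is driven by the bdevperf invocation echoed in the run_test line above. A minimal hand-run sketch, assuming the same repo layout and JSON config (paths and flags copied verbatim from the xtrace; adjust for another checkout):

# Sketch only; all flags copied from the run_test line above.
#   -q 128   queue depth
#   -o 4096  I/O size in bytes
#   -w verify  write, read back, and compare
#   -t 5     run time in seconds
#   -m 0x3   core mask (the two reactors started above)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3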
00:15:10.005 22304.00 IOPS, 87.12 MiB/s [2024-10-28T18:05:27.417Z] 22576.00 IOPS, 88.19 MiB/s [2024-10-28T18:05:28.351Z] 23360.67 IOPS, 91.25 MiB/s [2024-10-28T18:05:29.286Z] 23361.00 IOPS, 91.25 MiB/s [2024-10-28T18:05:29.286Z] 23232.80 IOPS, 90.75 MiB/s 00:15:12.808 Latency(us) 00:15:12.808 [2024-10-28T18:05:29.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:12.808 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x0 length 0xa0000 00:15:12.808 nvme0n1 : 5.05 1749.92 6.84 0.00 0.00 73013.01 10545.34 79119.83 00:15:12.808 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0xa0000 length 0xa0000 00:15:12.808 nvme0n1 : 5.03 1654.22 6.46 0.00 0.00 77227.40 9353.77 70540.57 00:15:12.808 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x0 length 0xbd0bd 00:15:12.808 nvme1n1 : 5.06 3087.84 12.06 0.00 0.00 41172.47 4825.83 63867.81 00:15:12.808 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:15:12.808 nvme1n1 : 5.06 2824.06 11.03 0.00 0.00 45129.14 5540.77 63867.81 00:15:12.808 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x0 length 0x80000 00:15:12.808 nvme2n1 : 5.05 1774.70 6.93 0.00 0.00 71696.68 7030.23 63867.81 00:15:12.808 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x80000 length 0x80000 00:15:12.808 nvme2n1 : 5.07 1664.89 6.50 0.00 0.00 76502.72 11081.54 63391.19 00:15:12.808 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x0 length 0x80000 00:15:12.808 nvme2n2 : 5.05 1748.76 6.83 0.00 0.00 72615.17 7804.74 59816.49 00:15:12.808 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x80000 length 0x80000 00:15:12.808 nvme2n2 : 5.08 1663.58 6.50 0.00 0.00 76343.24 13226.36 72447.07 00:15:12.808 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x0 length 0x80000 00:15:12.808 nvme2n3 : 5.05 1750.54 6.84 0.00 0.00 72393.35 11021.96 68157.44 00:15:12.808 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x80000 length 0x80000 00:15:12.808 nvme2n3 : 5.07 1666.34 6.51 0.00 0.00 76039.76 9770.82 81979.58 00:15:12.808 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x0 length 0x20000 00:15:12.808 nvme3n1 : 5.07 1768.34 6.91 0.00 0.00 71534.75 3932.16 77213.32 00:15:12.808 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:15:12.808 Verification LBA range: start 0x20000 length 0x20000 00:15:12.808 nvme3n1 : 5.08 1662.27 6.49 0.00 0.00 76090.86 8817.57 88652.33 00:15:12.808 [2024-10-28T18:05:29.286Z] =================================================================================================================== 00:15:12.808 [2024-10-28T18:05:29.286Z] Total : 23015.46 89.90 0.00 0.00 66269.60 3932.16 88652.33 00:15:13.743 00:15:13.743 real 0m6.854s 00:15:13.743 user 0m10.733s 00:15:13.743 sys 0m1.756s 00:15:13.743 18:05:30 blockdev_xnvme.bdev_verify -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:15:13.743 ************************************ 00:15:13.743 END TEST bdev_verify 00:15:13.743 ************************************ 00:15:13.743 18:05:30 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:15:13.743 18:05:30 blockdev_xnvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:13.743 18:05:30 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 16 -le 1 ']' 00:15:13.743 18:05:30 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:13.743 18:05:30 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:13.743 ************************************ 00:15:13.743 START TEST bdev_verify_big_io 00:15:13.743 ************************************ 00:15:13.743 18:05:30 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:15:13.743 [2024-10-28 18:05:30.183422] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:15:13.743 [2024-10-28 18:05:30.184415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71867 ] 00:15:14.003 [2024-10-28 18:05:30.364551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:14.003 [2024-10-28 18:05:30.455196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.003 [2024-10-28 18:05:30.455199] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.570 Running I/O for 5 seconds... 
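This big-I/O pass is the same bdevperf harness with -o 65536, i.e. 64 KiB per I/O (see the run_test line above). A quick arithmetic check, not part of the suite, for the throughput columns reported below: MiB/s = IOPS x I/O size / 2^20, so for the totals row,

awk 'BEGIN { printf "%.2f MiB/s\n", 1509.83 * 65536 / 2^20 }'   # 1509.83 IOPS at 64 KiB -> 94.36 MiB/s, matching the Total line below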
00:15:20.386 832.00 IOPS, 52.00 MiB/s [2024-10-28T18:05:37.122Z] 2636.00 IOPS, 164.75 MiB/s [2024-10-28T18:05:37.122Z] 2978.67 IOPS, 186.17 MiB/s 00:15:20.644 Latency(us) 00:15:20.644 [2024-10-28T18:05:37.122Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:20.644 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:20.644 Verification LBA range: start 0x0 length 0xa000 00:15:20.644 nvme0n1 : 6.02 111.64 6.98 0.00 0.00 1083932.10 131548.63 1273543.21 00:15:20.644 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:20.644 Verification LBA range: start 0xa000 length 0xa000 00:15:20.644 nvme0n1 : 5.69 115.34 7.21 0.00 0.00 1065120.93 77689.95 1273543.21 00:15:20.644 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:20.644 Verification LBA range: start 0x0 length 0xbd0b 00:15:20.644 nvme1n1 : 6.04 156.37 9.77 0.00 0.00 769893.00 58148.31 1342177.28 00:15:20.644 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:20.644 Verification LBA range: start 0xbd0b length 0xbd0b 00:15:20.644 nvme1n1 : 5.92 151.31 9.46 0.00 0.00 784214.04 99614.72 758787.72 00:15:20.644 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:20.644 Verification LBA range: start 0x0 length 0x8000 00:15:20.644 nvme2n1 : 6.02 156.69 9.79 0.00 0.00 744035.81 98661.47 941811.90 00:15:20.644 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:20.644 Verification LBA range: start 0x8000 length 0x8000 00:15:20.644 nvme2n1 : 6.03 79.57 4.97 0.00 0.00 1473713.00 84362.71 3141915.00 00:15:20.644 Job: nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:20.644 Verification LBA range: start 0x0 length 0x8000 00:15:20.645 nvme2n2 : 6.04 103.27 6.45 0.00 0.00 1096185.08 108193.98 2638598.52 00:15:20.645 Job: nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:20.645 Verification LBA range: start 0x8000 length 0x8000 00:15:20.645 nvme2n2 : 6.02 140.86 8.80 0.00 0.00 810664.41 9413.35 1281169.22 00:15:20.645 Job: nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:20.645 Verification LBA range: start 0x0 length 0x8000 00:15:20.645 nvme2n3 : 6.03 116.78 7.30 0.00 0.00 945167.23 87222.46 2135282.04 00:15:20.645 Job: nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:20.645 Verification LBA range: start 0x8000 length 0x8000 00:15:20.645 nvme2n3 : 6.02 130.16 8.14 0.00 0.00 845719.28 89605.59 1670095.59 00:15:20.645 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:15:20.645 Verification LBA range: start 0x0 length 0x2000 00:15:20.645 nvme3n1 : 6.05 116.42 7.28 0.00 0.00 923884.96 11796.48 2242046.14 00:15:20.645 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:15:20.645 Verification LBA range: start 0x2000 length 0x2000 00:15:20.645 nvme3n1 : 6.03 131.40 8.21 0.00 0.00 820439.82 10426.18 2470826.36 00:15:20.645 [2024-10-28T18:05:37.123Z] =================================================================================================================== 00:15:20.645 [2024-10-28T18:05:37.123Z] Total : 1509.83 94.36 0.00 0.00 913618.08 9413.35 3141915.00 00:15:22.020 00:15:22.020 real 0m8.116s 00:15:22.020 user 0m14.819s 00:15:22.020 sys 0m0.497s 00:15:22.020 18:05:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:22.020 
************************************ 00:15:22.020 END TEST bdev_verify_big_io 00:15:22.020 ************************************ 00:15:22.020 18:05:38 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.020 18:05:38 blockdev_xnvme -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:22.020 18:05:38 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:15:22.020 18:05:38 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:22.020 18:05:38 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:22.020 ************************************ 00:15:22.020 START TEST bdev_write_zeroes 00:15:22.020 ************************************ 00:15:22.020 18:05:38 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:22.020 [2024-10-28 18:05:38.339689] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:15:22.020 [2024-10-28 18:05:38.339897] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71978 ] 00:15:22.278 [2024-10-28 18:05:38.510159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:22.278 [2024-10-28 18:05:38.600082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:22.536 Running I/O for 1 seconds... 
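The write_zeroes pass, per the run_test line above, reuses bdevperf with a one-second zero-fill workload on a single core. A minimal sketch under the same assumptions (paths copied from the xtrace):

# Sketch only; flags copied from the run_test line above.
#   -w write_zeroes issues the bdev write-zeroes opcode rather than data writes
#   -t 1 runs for one second; no -m flag, so a single core (EAL -c 0x1 above)
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w write_zeroes -t 1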
00:15:23.910 68384.00 IOPS, 267.12 MiB/s 00:15:23.910 Latency(us) 00:15:23.910 [2024-10-28T18:05:40.388Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.910 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:23.910 nvme0n1 : 1.03 10316.29 40.30 0.00 0.00 12394.36 7149.38 26452.71 00:15:23.910 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:23.910 nvme1n1 : 1.03 16228.90 63.39 0.00 0.00 7869.63 4289.63 27286.81 00:15:23.910 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:23.910 nvme2n1 : 1.03 10301.03 40.24 0.00 0.00 12343.08 5898.24 19422.49 00:15:23.910 Job: nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:23.910 nvme2n2 : 1.03 10285.74 40.18 0.00 0.00 12351.60 6017.40 22163.08 00:15:23.910 Job: nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:23.910 nvme2n3 : 1.03 10270.95 40.12 0.00 0.00 12362.03 6225.92 24665.37 00:15:23.910 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:15:23.910 nvme3n1 : 1.04 10256.26 40.06 0.00 0.00 12368.90 6285.50 27286.81 00:15:23.910 [2024-10-28T18:05:40.388Z] =================================================================================================================== 00:15:23.910 [2024-10-28T18:05:40.388Z] Total : 67659.17 264.29 0.00 0.00 11290.37 4289.63 27286.81 00:15:24.846 00:15:24.846 real 0m2.740s 00:15:24.846 user 0m2.014s 00:15:24.846 sys 0m0.536s 00:15:24.846 18:05:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:24.846 ************************************ 00:15:24.846 END TEST bdev_write_zeroes 00:15:24.846 18:05:40 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:15:24.846 ************************************ 00:15:24.846 18:05:41 blockdev_xnvme -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:24.846 18:05:41 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:15:24.846 18:05:41 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:24.846 18:05:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:24.846 ************************************ 00:15:24.846 START TEST bdev_json_nonenclosed 00:15:24.846 ************************************ 00:15:24.846 18:05:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:24.846 [2024-10-28 18:05:41.150015] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:15:24.846 [2024-10-28 18:05:41.150197] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72027 ] 00:15:25.104 [2024-10-28 18:05:41.330786] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.104 [2024-10-28 18:05:41.413070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.104 [2024-10-28 18:05:41.413184] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:15:25.104 [2024-10-28 18:05:41.413208] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:25.104 [2024-10-28 18:05:41.413221] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:25.364 00:15:25.364 real 0m0.589s 00:15:25.364 user 0m0.350s 00:15:25.364 sys 0m0.134s 00:15:25.364 18:05:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:25.364 ************************************ 00:15:25.364 18:05:41 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:15:25.364 END TEST bdev_json_nonenclosed 00:15:25.364 ************************************ 00:15:25.364 18:05:41 blockdev_xnvme -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:25.364 18:05:41 blockdev_xnvme -- common/autotest_common.sh@1103 -- # '[' 13 -le 1 ']' 00:15:25.364 18:05:41 blockdev_xnvme -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:25.364 18:05:41 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.364 ************************************ 00:15:25.364 START TEST bdev_json_nonarray 00:15:25.364 ************************************ 00:15:25.364 18:05:41 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:15:25.364 [2024-10-28 18:05:41.788471] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:15:25.364 [2024-10-28 18:05:41.788641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72052 ] 00:15:25.623 [2024-10-28 18:05:41.969539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.623 [2024-10-28 18:05:42.063629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.623 [2024-10-28 18:05:42.063742] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:15:25.623 [2024-10-28 18:05:42.063766] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:15:25.623 [2024-10-28 18:05:42.063799] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:25.882 00:15:25.882 real 0m0.604s 00:15:25.882 user 0m0.374s 00:15:25.882 sys 0m0.125s 00:15:25.882 18:05:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:25.882 18:05:42 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:15:25.882 ************************************ 00:15:25.882 END TEST bdev_json_nonarray 00:15:25.882 ************************************ 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@786 -- # [[ xnvme == bdev ]] 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@793 -- # [[ xnvme == gpt ]] 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@797 -- # [[ xnvme == crypto_sw ]] 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@810 -- # cleanup 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]] 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]] 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]] 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]] 00:15:25.882 18:05:42 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:15:26.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:27.824 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:15:27.824 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:15:27.824 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:15:28.083 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:15:28.083 00:15:28.083 real 0m59.928s 00:15:28.083 user 1m41.902s 00:15:28.083 sys 0m28.406s 00:15:28.083 ************************************ 00:15:28.083 END TEST blockdev_xnvme 00:15:28.083 ************************************ 00:15:28.083 18:05:44 blockdev_xnvme -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:28.083 18:05:44 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.083 18:05:44 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:28.083 18:05:44 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:28.083 18:05:44 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:28.083 18:05:44 -- common/autotest_common.sh@10 -- # set +x 00:15:28.083 ************************************ 00:15:28.083 START TEST ublk 00:15:28.083 ************************************ 00:15:28.083 18:05:44 ublk -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh 00:15:28.083 * Looking for test storage... 
00:15:28.083 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:28.083 18:05:44 ublk -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:28.083 18:05:44 ublk -- common/autotest_common.sh@1691 -- # lcov --version 00:15:28.083 18:05:44 ublk -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:28.342 18:05:44 ublk -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:28.342 18:05:44 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:28.342 18:05:44 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:28.342 18:05:44 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:28.342 18:05:44 ublk -- scripts/common.sh@336 -- # IFS=.-: 00:15:28.342 18:05:44 ublk -- scripts/common.sh@336 -- # read -ra ver1 00:15:28.342 18:05:44 ublk -- scripts/common.sh@337 -- # IFS=.-: 00:15:28.342 18:05:44 ublk -- scripts/common.sh@337 -- # read -ra ver2 00:15:28.342 18:05:44 ublk -- scripts/common.sh@338 -- # local 'op=<' 00:15:28.342 18:05:44 ublk -- scripts/common.sh@340 -- # ver1_l=2 00:15:28.342 18:05:44 ublk -- scripts/common.sh@341 -- # ver2_l=1 00:15:28.342 18:05:44 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:28.342 18:05:44 ublk -- scripts/common.sh@344 -- # case "$op" in 00:15:28.342 18:05:44 ublk -- scripts/common.sh@345 -- # : 1 00:15:28.342 18:05:44 ublk -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:28.342 18:05:44 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:28.342 18:05:44 ublk -- scripts/common.sh@365 -- # decimal 1 00:15:28.342 18:05:44 ublk -- scripts/common.sh@353 -- # local d=1 00:15:28.342 18:05:44 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:28.342 18:05:44 ublk -- scripts/common.sh@355 -- # echo 1 00:15:28.342 18:05:44 ublk -- scripts/common.sh@365 -- # ver1[v]=1 00:15:28.342 18:05:44 ublk -- scripts/common.sh@366 -- # decimal 2 00:15:28.342 18:05:44 ublk -- scripts/common.sh@353 -- # local d=2 00:15:28.342 18:05:44 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:28.342 18:05:44 ublk -- scripts/common.sh@355 -- # echo 2 00:15:28.342 18:05:44 ublk -- scripts/common.sh@366 -- # ver2[v]=2 00:15:28.342 18:05:44 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:28.342 18:05:44 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:28.342 18:05:44 ublk -- scripts/common.sh@368 -- # return 0 00:15:28.342 18:05:44 ublk -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:28.342 18:05:44 ublk -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:28.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.342 --rc genhtml_branch_coverage=1 00:15:28.342 --rc genhtml_function_coverage=1 00:15:28.342 --rc genhtml_legend=1 00:15:28.342 --rc geninfo_all_blocks=1 00:15:28.342 --rc geninfo_unexecuted_blocks=1 00:15:28.342 00:15:28.342 ' 00:15:28.342 18:05:44 ublk -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:28.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.342 --rc genhtml_branch_coverage=1 00:15:28.342 --rc genhtml_function_coverage=1 00:15:28.342 --rc genhtml_legend=1 00:15:28.342 --rc geninfo_all_blocks=1 00:15:28.342 --rc geninfo_unexecuted_blocks=1 00:15:28.342 00:15:28.342 ' 00:15:28.342 18:05:44 ublk -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:28.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.342 --rc genhtml_branch_coverage=1 00:15:28.342 --rc 
genhtml_function_coverage=1 00:15:28.342 --rc genhtml_legend=1 00:15:28.342 --rc geninfo_all_blocks=1 00:15:28.342 --rc geninfo_unexecuted_blocks=1 00:15:28.342 00:15:28.342 ' 00:15:28.342 18:05:44 ublk -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:28.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:28.342 --rc genhtml_branch_coverage=1 00:15:28.342 --rc genhtml_function_coverage=1 00:15:28.342 --rc genhtml_legend=1 00:15:28.342 --rc geninfo_all_blocks=1 00:15:28.342 --rc geninfo_unexecuted_blocks=1 00:15:28.342 00:15:28.342 ' 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:28.342 18:05:44 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:28.342 18:05:44 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:28.342 18:05:44 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:28.342 18:05:44 ublk -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:28.342 18:05:44 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:28.342 18:05:44 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:28.342 18:05:44 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:28.342 18:05:44 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]] 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv 00:15:28.342 18:05:44 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config 00:15:28.342 18:05:44 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:28.342 18:05:44 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:28.342 18:05:44 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:28.342 ************************************ 00:15:28.342 START TEST test_save_ublk_config 00:15:28.343 ************************************ 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- common/autotest_common.sh@1127 -- # test_save_config 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=72344 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 72344 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72344 ']' 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:28.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:28.343 18:05:44 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:28.343 [2024-10-28 18:05:44.799783] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:15:28.343 [2024-10-28 18:05:44.800243] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72344 ] 00:15:28.600 [2024-10-28 18:05:44.986560] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:28.859 [2024-10-28 18:05:45.112945] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.425 18:05:45 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:29.425 18:05:45 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:15:29.425 18:05:45 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0 00:15:29.425 18:05:45 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd 00:15:29.425 18:05:45 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.425 18:05:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:29.425 [2024-10-28 18:05:45.888962] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:29.425 [2024-10-28 18:05:45.889988] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:29.683 malloc0 00:15:29.683 [2024-10-28 18:05:45.957087] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:29.683 [2024-10-28 18:05:45.957216] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:29.683 [2024-10-28 18:05:45.957232] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:29.683 [2024-10-28 18:05:45.957240] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:29.683 [2024-10-28 18:05:45.964987] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:29.683 [2024-10-28 18:05:45.965016] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:29.683 [2024-10-28 18:05:45.971940] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:29.683 [2024-10-28 18:05:45.972048] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:29.683 [2024-10-28 18:05:45.994896] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:29.683 0 00:15:29.683 18:05:45 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.683 18:05:45 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config 00:15:29.683 18:05:45 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.683 18:05:45 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:29.942 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.942 18:05:46 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{ 00:15:29.942 "subsystems": [ 00:15:29.942 { 00:15:29.942 "subsystem": "fsdev", 00:15:29.942 
"config": [ 00:15:29.942 { 00:15:29.942 "method": "fsdev_set_opts", 00:15:29.942 "params": { 00:15:29.942 "fsdev_io_pool_size": 65535, 00:15:29.942 "fsdev_io_cache_size": 256 00:15:29.943 } 00:15:29.943 } 00:15:29.943 ] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "keyring", 00:15:29.943 "config": [] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "iobuf", 00:15:29.943 "config": [ 00:15:29.943 { 00:15:29.943 "method": "iobuf_set_options", 00:15:29.943 "params": { 00:15:29.943 "small_pool_count": 8192, 00:15:29.943 "large_pool_count": 1024, 00:15:29.943 "small_bufsize": 8192, 00:15:29.943 "large_bufsize": 135168, 00:15:29.943 "enable_numa": false 00:15:29.943 } 00:15:29.943 } 00:15:29.943 ] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "sock", 00:15:29.943 "config": [ 00:15:29.943 { 00:15:29.943 "method": "sock_set_default_impl", 00:15:29.943 "params": { 00:15:29.943 "impl_name": "posix" 00:15:29.943 } 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "method": "sock_impl_set_options", 00:15:29.943 "params": { 00:15:29.943 "impl_name": "ssl", 00:15:29.943 "recv_buf_size": 4096, 00:15:29.943 "send_buf_size": 4096, 00:15:29.943 "enable_recv_pipe": true, 00:15:29.943 "enable_quickack": false, 00:15:29.943 "enable_placement_id": 0, 00:15:29.943 "enable_zerocopy_send_server": true, 00:15:29.943 "enable_zerocopy_send_client": false, 00:15:29.943 "zerocopy_threshold": 0, 00:15:29.943 "tls_version": 0, 00:15:29.943 "enable_ktls": false 00:15:29.943 } 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "method": "sock_impl_set_options", 00:15:29.943 "params": { 00:15:29.943 "impl_name": "posix", 00:15:29.943 "recv_buf_size": 2097152, 00:15:29.943 "send_buf_size": 2097152, 00:15:29.943 "enable_recv_pipe": true, 00:15:29.943 "enable_quickack": false, 00:15:29.943 "enable_placement_id": 0, 00:15:29.943 "enable_zerocopy_send_server": true, 00:15:29.943 "enable_zerocopy_send_client": false, 00:15:29.943 "zerocopy_threshold": 0, 00:15:29.943 "tls_version": 0, 00:15:29.943 "enable_ktls": false 00:15:29.943 } 00:15:29.943 } 00:15:29.943 ] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "vmd", 00:15:29.943 "config": [] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "accel", 00:15:29.943 "config": [ 00:15:29.943 { 00:15:29.943 "method": "accel_set_options", 00:15:29.943 "params": { 00:15:29.943 "small_cache_size": 128, 00:15:29.943 "large_cache_size": 16, 00:15:29.943 "task_count": 2048, 00:15:29.943 "sequence_count": 2048, 00:15:29.943 "buf_count": 2048 00:15:29.943 } 00:15:29.943 } 00:15:29.943 ] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "bdev", 00:15:29.943 "config": [ 00:15:29.943 { 00:15:29.943 "method": "bdev_set_options", 00:15:29.943 "params": { 00:15:29.943 "bdev_io_pool_size": 65535, 00:15:29.943 "bdev_io_cache_size": 256, 00:15:29.943 "bdev_auto_examine": true, 00:15:29.943 "iobuf_small_cache_size": 128, 00:15:29.943 "iobuf_large_cache_size": 16 00:15:29.943 } 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "method": "bdev_raid_set_options", 00:15:29.943 "params": { 00:15:29.943 "process_window_size_kb": 1024, 00:15:29.943 "process_max_bandwidth_mb_sec": 0 00:15:29.943 } 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "method": "bdev_iscsi_set_options", 00:15:29.943 "params": { 00:15:29.943 "timeout_sec": 30 00:15:29.943 } 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "method": "bdev_nvme_set_options", 00:15:29.943 "params": { 00:15:29.943 "action_on_timeout": "none", 00:15:29.943 "timeout_us": 0, 00:15:29.943 "timeout_admin_us": 0, 00:15:29.943 
"keep_alive_timeout_ms": 10000, 00:15:29.943 "arbitration_burst": 0, 00:15:29.943 "low_priority_weight": 0, 00:15:29.943 "medium_priority_weight": 0, 00:15:29.943 "high_priority_weight": 0, 00:15:29.943 "nvme_adminq_poll_period_us": 10000, 00:15:29.943 "nvme_ioq_poll_period_us": 0, 00:15:29.943 "io_queue_requests": 0, 00:15:29.943 "delay_cmd_submit": true, 00:15:29.943 "transport_retry_count": 4, 00:15:29.943 "bdev_retry_count": 3, 00:15:29.943 "transport_ack_timeout": 0, 00:15:29.943 "ctrlr_loss_timeout_sec": 0, 00:15:29.943 "reconnect_delay_sec": 0, 00:15:29.943 "fast_io_fail_timeout_sec": 0, 00:15:29.943 "disable_auto_failback": false, 00:15:29.943 "generate_uuids": false, 00:15:29.943 "transport_tos": 0, 00:15:29.943 "nvme_error_stat": false, 00:15:29.943 "rdma_srq_size": 0, 00:15:29.943 "io_path_stat": false, 00:15:29.943 "allow_accel_sequence": false, 00:15:29.943 "rdma_max_cq_size": 0, 00:15:29.943 "rdma_cm_event_timeout_ms": 0, 00:15:29.943 "dhchap_digests": [ 00:15:29.943 "sha256", 00:15:29.943 "sha384", 00:15:29.943 "sha512" 00:15:29.943 ], 00:15:29.943 "dhchap_dhgroups": [ 00:15:29.943 "null", 00:15:29.943 "ffdhe2048", 00:15:29.943 "ffdhe3072", 00:15:29.943 "ffdhe4096", 00:15:29.943 "ffdhe6144", 00:15:29.943 "ffdhe8192" 00:15:29.943 ] 00:15:29.943 } 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "method": "bdev_nvme_set_hotplug", 00:15:29.943 "params": { 00:15:29.943 "period_us": 100000, 00:15:29.943 "enable": false 00:15:29.943 } 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "method": "bdev_malloc_create", 00:15:29.943 "params": { 00:15:29.943 "name": "malloc0", 00:15:29.943 "num_blocks": 8192, 00:15:29.943 "block_size": 4096, 00:15:29.943 "physical_block_size": 4096, 00:15:29.943 "uuid": "6632b0ae-f0de-4626-a138-897a4e01ba81", 00:15:29.943 "optimal_io_boundary": 0, 00:15:29.943 "md_size": 0, 00:15:29.943 "dif_type": 0, 00:15:29.943 "dif_is_head_of_md": false, 00:15:29.943 "dif_pi_format": 0 00:15:29.943 } 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "method": "bdev_wait_for_examine" 00:15:29.943 } 00:15:29.943 ] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "scsi", 00:15:29.943 "config": null 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "scheduler", 00:15:29.943 "config": [ 00:15:29.943 { 00:15:29.943 "method": "framework_set_scheduler", 00:15:29.943 "params": { 00:15:29.943 "name": "static" 00:15:29.943 } 00:15:29.943 } 00:15:29.943 ] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "vhost_scsi", 00:15:29.943 "config": [] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "vhost_blk", 00:15:29.943 "config": [] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "ublk", 00:15:29.943 "config": [ 00:15:29.943 { 00:15:29.943 "method": "ublk_create_target", 00:15:29.943 "params": { 00:15:29.943 "cpumask": "1" 00:15:29.943 } 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "method": "ublk_start_disk", 00:15:29.943 "params": { 00:15:29.943 "bdev_name": "malloc0", 00:15:29.943 "ublk_id": 0, 00:15:29.943 "num_queues": 1, 00:15:29.943 "queue_depth": 128 00:15:29.943 } 00:15:29.943 } 00:15:29.943 ] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "nbd", 00:15:29.943 "config": [] 00:15:29.943 }, 00:15:29.943 { 00:15:29.943 "subsystem": "nvmf", 00:15:29.943 "config": [ 00:15:29.943 { 00:15:29.943 "method": "nvmf_set_config", 00:15:29.943 "params": { 00:15:29.943 "discovery_filter": "match_any", 00:15:29.943 "admin_cmd_passthru": { 00:15:29.943 "identify_ctrlr": false 00:15:29.943 }, 00:15:29.943 "dhchap_digests": [ 00:15:29.943 "sha256", 00:15:29.943 
"sha384", 00:15:29.943 "sha512" 00:15:29.944 ], 00:15:29.944 "dhchap_dhgroups": [ 00:15:29.944 "null", 00:15:29.944 "ffdhe2048", 00:15:29.944 "ffdhe3072", 00:15:29.944 "ffdhe4096", 00:15:29.944 "ffdhe6144", 00:15:29.944 "ffdhe8192" 00:15:29.944 ] 00:15:29.944 } 00:15:29.944 }, 00:15:29.944 { 00:15:29.944 "method": "nvmf_set_max_subsystems", 00:15:29.944 "params": { 00:15:29.944 "max_subsystems": 1024 00:15:29.944 } 00:15:29.944 }, 00:15:29.944 { 00:15:29.944 "method": "nvmf_set_crdt", 00:15:29.944 "params": { 00:15:29.944 "crdt1": 0, 00:15:29.944 "crdt2": 0, 00:15:29.944 "crdt3": 0 00:15:29.944 } 00:15:29.944 } 00:15:29.944 ] 00:15:29.944 }, 00:15:29.944 { 00:15:29.944 "subsystem": "iscsi", 00:15:29.944 "config": [ 00:15:29.944 { 00:15:29.944 "method": "iscsi_set_options", 00:15:29.944 "params": { 00:15:29.944 "node_base": "iqn.2016-06.io.spdk", 00:15:29.944 "max_sessions": 128, 00:15:29.944 "max_connections_per_session": 2, 00:15:29.944 "max_queue_depth": 64, 00:15:29.944 "default_time2wait": 2, 00:15:29.944 "default_time2retain": 20, 00:15:29.944 "first_burst_length": 8192, 00:15:29.944 "immediate_data": true, 00:15:29.944 "allow_duplicated_isid": false, 00:15:29.944 "error_recovery_level": 0, 00:15:29.944 "nop_timeout": 60, 00:15:29.944 "nop_in_interval": 30, 00:15:29.944 "disable_chap": false, 00:15:29.944 "require_chap": false, 00:15:29.944 "mutual_chap": false, 00:15:29.944 "chap_group": 0, 00:15:29.944 "max_large_datain_per_connection": 64, 00:15:29.944 "max_r2t_per_connection": 4, 00:15:29.944 "pdu_pool_size": 36864, 00:15:29.944 "immediate_data_pool_size": 16384, 00:15:29.944 "data_out_pool_size": 2048 00:15:29.944 } 00:15:29.944 } 00:15:29.944 ] 00:15:29.944 } 00:15:29.944 ] 00:15:29.944 }' 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 72344 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72344 ']' 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72344 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72344 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:29.944 killing process with pid 72344 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72344' 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72344 00:15:29.944 18:05:46 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72344 00:15:31.317 [2024-10-28 18:05:47.468580] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:31.317 [2024-10-28 18:05:47.501919] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:31.317 [2024-10-28 18:05:47.502098] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:31.317 [2024-10-28 18:05:47.511959] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:31.317 [2024-10-28 18:05:47.512018] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 
00:15:31.317 [2024-10-28 18:05:47.512037] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:31.317 [2024-10-28 18:05:47.512075] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:31.317 [2024-10-28 18:05:47.512255] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:32.691 18:05:49 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=72404 00:15:32.691 18:05:49 ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 72404 00:15:32.691 18:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@833 -- # '[' -z 72404 ']' 00:15:32.691 18:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:32.691 18:05:49 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:15:32.691 18:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:32.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:32.691 18:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:32.691 18:05:49 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:15:32.691 "subsystems": [ 00:15:32.691 { 00:15:32.691 "subsystem": "fsdev", 00:15:32.691 "config": [ 00:15:32.691 { 00:15:32.691 "method": "fsdev_set_opts", 00:15:32.691 "params": { 00:15:32.691 "fsdev_io_pool_size": 65535, 00:15:32.691 "fsdev_io_cache_size": 256 00:15:32.691 } 00:15:32.691 } 00:15:32.691 ] 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "subsystem": "keyring", 00:15:32.691 "config": [] 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "subsystem": "iobuf", 00:15:32.691 "config": [ 00:15:32.691 { 00:15:32.691 "method": "iobuf_set_options", 00:15:32.691 "params": { 00:15:32.691 "small_pool_count": 8192, 00:15:32.691 "large_pool_count": 1024, 00:15:32.691 "small_bufsize": 8192, 00:15:32.691 "large_bufsize": 135168, 00:15:32.691 "enable_numa": false 00:15:32.691 } 00:15:32.691 } 00:15:32.691 ] 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "subsystem": "sock", 00:15:32.691 "config": [ 00:15:32.691 { 00:15:32.691 "method": "sock_set_default_impl", 00:15:32.691 "params": { 00:15:32.691 "impl_name": "posix" 00:15:32.691 } 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "method": "sock_impl_set_options", 00:15:32.691 "params": { 00:15:32.691 "impl_name": "ssl", 00:15:32.691 "recv_buf_size": 4096, 00:15:32.691 "send_buf_size": 4096, 00:15:32.691 "enable_recv_pipe": true, 00:15:32.691 "enable_quickack": false, 00:15:32.691 "enable_placement_id": 0, 00:15:32.691 "enable_zerocopy_send_server": true, 00:15:32.691 "enable_zerocopy_send_client": false, 00:15:32.691 "zerocopy_threshold": 0, 00:15:32.691 "tls_version": 0, 00:15:32.691 "enable_ktls": false 00:15:32.691 } 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "method": "sock_impl_set_options", 00:15:32.691 "params": { 00:15:32.691 "impl_name": "posix", 00:15:32.691 "recv_buf_size": 2097152, 00:15:32.691 "send_buf_size": 2097152, 00:15:32.691 "enable_recv_pipe": true, 00:15:32.691 "enable_quickack": false, 00:15:32.691 "enable_placement_id": 0, 00:15:32.691 "enable_zerocopy_send_server": true, 00:15:32.691 "enable_zerocopy_send_client": false, 00:15:32.691 "zerocopy_threshold": 0, 00:15:32.691 "tls_version": 0, 00:15:32.691 "enable_ktls": false 00:15:32.691 } 00:15:32.691 } 00:15:32.691 ] 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "subsystem": "vmd", 00:15:32.691 "config": [] 
00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "subsystem": "accel", 00:15:32.691 "config": [ 00:15:32.691 { 00:15:32.691 "method": "accel_set_options", 00:15:32.691 "params": { 00:15:32.691 "small_cache_size": 128, 00:15:32.691 "large_cache_size": 16, 00:15:32.691 "task_count": 2048, 00:15:32.691 "sequence_count": 2048, 00:15:32.691 "buf_count": 2048 00:15:32.691 } 00:15:32.691 } 00:15:32.691 ] 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "subsystem": "bdev", 00:15:32.691 "config": [ 00:15:32.691 { 00:15:32.691 "method": "bdev_set_options", 00:15:32.691 "params": { 00:15:32.691 "bdev_io_pool_size": 65535, 00:15:32.691 "bdev_io_cache_size": 256, 00:15:32.691 "bdev_auto_examine": true, 00:15:32.691 "iobuf_small_cache_size": 128, 00:15:32.691 "iobuf_large_cache_size": 16 00:15:32.691 } 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "method": "bdev_raid_set_options", 00:15:32.691 "params": { 00:15:32.691 "process_window_size_kb": 1024, 00:15:32.691 "process_max_bandwidth_mb_sec": 0 00:15:32.691 } 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "method": "bdev_iscsi_set_options", 00:15:32.691 "params": { 00:15:32.691 "timeout_sec": 30 00:15:32.691 } 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "method": "bdev_nvme_set_options", 00:15:32.691 "params": { 00:15:32.691 "action_on_timeout": "none", 00:15:32.691 "timeout_us": 0, 00:15:32.691 "timeout_admin_us": 0, 00:15:32.691 "keep_alive_timeout_ms": 10000, 00:15:32.691 "arbitration_burst": 0, 00:15:32.691 "low_priority_weight": 0, 00:15:32.691 "medium_priority_weight": 0, 00:15:32.691 "high_priority_weight": 0, 00:15:32.691 "nvme_adminq_poll_period_us": 10000, 00:15:32.691 "nvme_ioq_poll_period_us": 0, 00:15:32.691 "io_queue_requests": 0, 00:15:32.691 "delay_cmd_submit": true, 00:15:32.691 "transport_retry_count": 4, 00:15:32.691 "bdev_retry_count": 3, 00:15:32.691 "transport_ack_timeout": 0, 00:15:32.691 "ctrlr_loss_timeout_sec": 0, 00:15:32.691 "reconnect_delay_sec": 0, 00:15:32.691 "fast_io_fail_timeout_sec": 0, 00:15:32.691 "disable_auto_failback": false, 00:15:32.691 "generate_uuids": false, 00:15:32.691 "transport_tos": 0, 00:15:32.691 "nvme_error_stat": false, 00:15:32.691 "rdma_srq_size": 0, 00:15:32.691 "io_path_stat": false, 00:15:32.691 "allow_accel_sequence": false, 00:15:32.691 "rdma_max_cq_size": 0, 00:15:32.691 "rdma_cm_event_timeout_ms": 0, 00:15:32.691 "dhchap_digests": [ 00:15:32.691 "sha256", 00:15:32.691 "sha384", 00:15:32.691 "sha512" 00:15:32.691 ], 00:15:32.691 "dhchap_dhgroups": [ 00:15:32.691 "null", 00:15:32.691 "ffdhe2048", 00:15:32.691 "ffdhe3072", 00:15:32.691 "ffdhe4096", 00:15:32.691 "ffdhe6144", 00:15:32.691 "ffdhe8192" 00:15:32.691 ] 00:15:32.691 } 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "method": "bdev_nvme_set_hotplug", 00:15:32.691 "params": { 00:15:32.691 "period_us": 100000, 00:15:32.691 "enable": false 00:15:32.691 } 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "method": "bdev_malloc_create", 00:15:32.691 "params": { 00:15:32.691 "name": "malloc0", 00:15:32.691 "num_blocks": 8192, 00:15:32.691 "block_size": 4096, 00:15:32.691 "physical_block_size": 4096, 00:15:32.691 "uuid": "6632b0ae-f0de-4626-a138-897a4e01ba81", 00:15:32.691 "optimal_io_boundary": 0, 00:15:32.691 "md_size": 0, 00:15:32.691 "dif_type": 0, 00:15:32.691 "dif_is_head_of_md": false, 00:15:32.691 "dif_pi_format": 0 00:15:32.691 } 00:15:32.691 }, 00:15:32.691 { 00:15:32.691 "method": "bdev_wait_for_examine" 00:15:32.691 } 00:15:32.691 ] 00:15:32.691 }, 00:15:32.692 { 00:15:32.692 "subsystem": "scsi", 00:15:32.692 "config": null 00:15:32.692 }, 00:15:32.692 
{ 00:15:32.692 "subsystem": "scheduler", 00:15:32.692 "config": [ 00:15:32.692 { 00:15:32.692 "method": "framework_set_scheduler", 00:15:32.692 "params": { 00:15:32.692 "name": "static" 00:15:32.692 } 00:15:32.692 } 00:15:32.692 ] 00:15:32.692 }, 00:15:32.692 { 00:15:32.692 "subsystem": "vhost_scsi", 00:15:32.692 "config": [] 00:15:32.692 }, 00:15:32.692 { 00:15:32.692 "subsystem": "vhost_blk", 00:15:32.692 "config": [] 00:15:32.692 }, 00:15:32.692 { 00:15:32.692 "subsystem": "ublk", 00:15:32.692 "config": [ 00:15:32.692 { 00:15:32.692 "method": "ublk_create_target", 00:15:32.692 "params": { 00:15:32.692 "cpumask": "1" 00:15:32.692 } 00:15:32.692 }, 00:15:32.692 { 00:15:32.692 "method": "ublk_start_disk", 00:15:32.692 "params": { 00:15:32.692 "bdev_name": "malloc0", 00:15:32.692 "ublk_id": 0, 00:15:32.692 "num_queues": 1, 00:15:32.692 "queue_depth": 128 00:15:32.692 } 00:15:32.692 } 00:15:32.692 ] 00:15:32.692 }, 00:15:32.692 { 00:15:32.692 "subsystem": "nbd", 00:15:32.692 "config": [] 00:15:32.692 }, 00:15:32.692 { 00:15:32.692 "subsystem": "nvmf", 00:15:32.692 "config": [ 00:15:32.692 { 00:15:32.692 "method": "nvmf_set_config", 00:15:32.692 "params": { 00:15:32.692 "discovery_filter": "match_any", 00:15:32.692 "admin_cmd_passthru": { 00:15:32.692 "identify_ctrlr": false 00:15:32.692 }, 00:15:32.692 "dhchap_digests": [ 00:15:32.692 "sha256", 00:15:32.692 "sha384", 00:15:32.692 "sha512" 00:15:32.692 ], 00:15:32.692 "dhchap_dhgroups": [ 00:15:32.692 "null", 00:15:32.692 "ffdhe2048", 00:15:32.692 "ffdhe3072", 00:15:32.692 "ffdhe4096", 00:15:32.692 "ffdhe6144", 00:15:32.692 "ffdhe8192" 00:15:32.692 ] 00:15:32.692 } 00:15:32.692 }, 00:15:32.692 { 00:15:32.692 "method": "nvmf_set_max_subsystems", 00:15:32.692 "params": { 00:15:32.692 "max_subsystems": 1024 00:15:32.692 } 00:15:32.692 }, 00:15:32.692 { 00:15:32.692 "method": "nvmf_set_crdt", 00:15:32.692 "params": { 00:15:32.692 "crdt1": 0, 00:15:32.692 "crdt2": 0, 00:15:32.692 "crdt3": 0 00:15:32.692 } 00:15:32.692 } 00:15:32.692 ] 00:15:32.692 }, 00:15:32.692 { 00:15:32.692 "subsystem": "iscsi", 00:15:32.692 "config": [ 00:15:32.692 { 00:15:32.692 "method": "iscsi_set_options", 00:15:32.692 "params": { 00:15:32.692 "node_base": "iqn.2016-06.io.spdk", 00:15:32.692 "max_sessions": 128, 00:15:32.692 "max_connections_per_session": 2, 00:15:32.692 "max_queue_depth": 64, 00:15:32.692 "default_time2wait": 2, 00:15:32.692 "default_time2retain": 20, 00:15:32.692 "first_burst_length": 8192, 00:15:32.692 "immediate_data": true, 00:15:32.692 "allow_duplicated_isid": false, 00:15:32.692 "error_recovery_level": 0, 00:15:32.692 "nop_timeout": 60, 00:15:32.692 "nop_in_interval": 30, 00:15:32.692 "disable_chap": false, 00:15:32.692 "require_chap": false, 00:15:32.692 "mutual_chap": false, 00:15:32.692 "chap_group": 0, 00:15:32.692 "max_large_datain_per_connection": 64, 00:15:32.692 "max_r2t_per_connection": 4, 00:15:32.692 "pdu_pool_size": 36864, 00:15:32.692 "immediate_data_pool_size": 16384, 00:15:32.692 "data_out_pool_size": 2048 00:15:32.692 } 00:15:32.692 } 00:15:32.692 ] 00:15:32.692 } 00:15:32.692 ] 00:15:32.692 }' 00:15:32.692 18:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:32.692 18:05:49 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:32.950 [2024-10-28 18:05:49.182665] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:15:32.950 [2024-10-28 18:05:49.182886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72404 ] 00:15:32.950 [2024-10-28 18:05:49.361861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.208 [2024-10-28 18:05:49.449525] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:34.141 [2024-10-28 18:05:50.285593] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:34.141 [2024-10-28 18:05:50.286797] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:34.141 [2024-10-28 18:05:50.295091] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:15:34.141 [2024-10-28 18:05:50.295206] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:15:34.141 [2024-10-28 18:05:50.295254] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:34.142 [2024-10-28 18:05:50.295277] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:34.142 [2024-10-28 18:05:50.301990] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:34.142 [2024-10-28 18:05:50.302019] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:34.142 [2024-10-28 18:05:50.309979] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:34.142 [2024-10-28 18:05:50.310107] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:34.142 [2024-10-28 18:05:50.329938] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@866 -- # return 0 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 72404 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@952 -- # '[' -z 72404 ']' 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@956 -- # kill -0 72404 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # uname 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72404 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- 
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:34.142 killing process with pid 72404 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72404' 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@971 -- # kill 72404 00:15:34.142 18:05:50 ublk.test_save_ublk_config -- common/autotest_common.sh@976 -- # wait 72404 00:15:35.515 [2024-10-28 18:05:51.711774] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:35.515 [2024-10-28 18:05:51.739933] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:35.515 [2024-10-28 18:05:51.740087] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:35.515 [2024-10-28 18:05:51.748000] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:35.515 [2024-10-28 18:05:51.748085] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:35.515 [2024-10-28 18:05:51.748099] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:35.515 [2024-10-28 18:05:51.748164] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:35.515 [2024-10-28 18:05:51.748377] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:36.888 18:05:53 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:15:36.888 00:15:36.888 real 0m8.599s 00:15:36.888 user 0m6.661s 00:15:36.888 sys 0m2.881s 00:15:36.888 18:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:36.888 18:05:53 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:15:36.888 ************************************ 00:15:36.888 END TEST test_save_ublk_config 00:15:36.888 ************************************ 00:15:36.888 18:05:53 ublk -- ublk/ublk.sh@139 -- # spdk_pid=72478 00:15:36.888 18:05:53 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:36.888 18:05:53 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:36.888 18:05:53 ublk -- ublk/ublk.sh@141 -- # waitforlisten 72478 00:15:36.888 18:05:53 ublk -- common/autotest_common.sh@833 -- # '[' -z 72478 ']' 00:15:36.888 18:05:53 ublk -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:36.888 18:05:53 ublk -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:36.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:36.888 18:05:53 ublk -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:36.888 18:05:53 ublk -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:36.888 18:05:53 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:37.146 [2024-10-28 18:05:53.437346] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:15:37.146 [2024-10-28 18:05:53.437569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72478 ] 00:15:37.146 [2024-10-28 18:05:53.620177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:37.404 [2024-10-28 18:05:53.713430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:37.404 [2024-10-28 18:05:53.713447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:37.970 18:05:54 ublk -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:37.970 18:05:54 ublk -- common/autotest_common.sh@866 -- # return 0 00:15:37.970 18:05:54 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:15:37.970 18:05:54 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:37.970 18:05:54 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:37.970 18:05:54 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:37.970 ************************************ 00:15:37.970 START TEST test_create_ublk 00:15:37.970 ************************************ 00:15:37.970 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@1127 -- # test_create_ublk 00:15:37.970 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:15:37.970 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.970 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:37.970 [2024-10-28 18:05:54.405997] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:37.970 [2024-10-28 18:05:54.408319] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:37.970 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.970 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:15:37.970 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:15:37.970 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.970 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.229 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.229 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:15:38.229 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:38.229 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.229 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.229 [2024-10-28 18:05:54.642101] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:15:38.229 [2024-10-28 18:05:54.642587] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:38.229 [2024-10-28 18:05:54.642625] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:38.229 [2024-10-28 18:05:54.642635] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:38.229 [2024-10-28 18:05:54.646295] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:38.229 [2024-10-28 18:05:54.646322] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:38.229 
[2024-10-28 18:05:54.655917] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:38.229 [2024-10-28 18:05:54.672967] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:38.229 [2024-10-28 18:05:54.685035] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:38.229 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.229 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:15:38.229 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:15:38.229 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:15:38.229 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.229 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:38.487 18:05:54 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:15:38.487 { 00:15:38.487 "ublk_device": "/dev/ublkb0", 00:15:38.487 "id": 0, 00:15:38.487 "queue_depth": 512, 00:15:38.487 "num_queues": 4, 00:15:38.487 "bdev_name": "Malloc0" 00:15:38.487 } 00:15:38.487 ]' 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:38.487 18:05:54 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
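At this point test_create_ublk has brought a device up end to end: ublk_create_target, a 128 MiB malloc bdev, then ublk_start_disk, which drives the kernel handshake traced above as ADD_DEV, SET_PARAMS and START_DEV control commands, each submitted and then completed. Condensed into plain RPC calls (all three appear verbatim in the xtrace output):

    # The bring-up sequence behind the trace above.
    scripts/rpc.py ublk_create_target                     # create the ublk target
    scripts/rpc.py bdev_malloc_create 128 4096            # 128 MiB, 4 KiB blocks -> Malloc0
    scripts/rpc.py ublk_start_disk Malloc0 0 -q 4 -d 512  # 4 queues, depth 512 -> /dev/ublkb0

The fio job template assembled above runs next: a 10-second, time-based write of pattern 0xcc across the first 128 MiB of /dev/ublkb0. Because the write phase consumes the whole runtime, fio immediately warns that the verification read phase will never start; only the write path is exercised here.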
00:15:38.487 18:05:54 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:15:38.751 fio: verification read phase will never start because write phase uses all of runtime 00:15:38.751 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:15:38.751 fio-3.35 00:15:38.751 Starting 1 process 00:15:48.748 00:15:48.748 fio_test: (groupid=0, jobs=1): err= 0: pid=72529: Mon Oct 28 18:06:05 2024 00:15:48.748 write: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(492MiB/10001msec); 0 zone resets 00:15:48.748 clat (usec): min=46, max=4191, avg=77.96, stdev=124.01 00:15:48.748 lat (usec): min=47, max=4211, avg=78.71, stdev=124.03 00:15:48.748 clat percentiles (usec): 00:15:48.748 | 1.00th=[ 53], 5.00th=[ 60], 10.00th=[ 62], 20.00th=[ 64], 00:15:48.748 | 30.00th=[ 65], 40.00th=[ 67], 50.00th=[ 68], 60.00th=[ 70], 00:15:48.748 | 70.00th=[ 74], 80.00th=[ 81], 90.00th=[ 90], 95.00th=[ 98], 00:15:48.748 | 99.00th=[ 118], 99.50th=[ 130], 99.90th=[ 2573], 99.95th=[ 3064], 00:15:48.748 | 99.99th=[ 3752] 00:15:48.748 bw ( KiB/s): min=48016, max=53928, per=100.00%, avg=50367.16, stdev=1594.26, samples=19 00:15:48.748 iops : min=12004, max=13482, avg=12591.79, stdev=398.56, samples=19 00:15:48.748 lat (usec) : 50=0.02%, 100=95.97%, 250=3.69%, 500=0.01%, 750=0.02% 00:15:48.748 lat (usec) : 1000=0.03% 00:15:48.748 lat (msec) : 2=0.10%, 4=0.16%, 10=0.01% 00:15:48.748 cpu : usr=3.12%, sys=9.25%, ctx=125880, majf=0, minf=795 00:15:48.748 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:15:48.748 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.748 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:48.748 issued rwts: total=0,125876,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:48.748 latency : target=0, window=0, percentile=100.00%, depth=1 00:15:48.748 00:15:48.748 Run status group 0 (all jobs): 00:15:48.748 WRITE: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=492MiB (516MB), run=10001-10001msec 00:15:48.748 00:15:48.748 Disk stats (read/write): 00:15:48.749 ublkb0: ios=0/124584, merge=0/0, ticks=0/8811, in_queue=8811, util=99.11% 00:15:48.749 18:06:05 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:15:48.749 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:48.749 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:48.749 [2024-10-28 18:06:05.198279] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:49.006 [2024-10-28 18:06:05.242943] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:49.006 [2024-10-28 18:06:05.243869] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:49.006 [2024-10-28 18:06:05.249955] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:49.006 [2024-10-28 18:06:05.250276] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:49.006 [2024-10-28 18:06:05.250299] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.006 18:06:05 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd 
ublk_stop_disk 0 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@650 -- # local es=0 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # rpc_cmd ublk_stop_disk 0 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.006 [2024-10-28 18:06:05.266055] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:15:49.006 request: 00:15:49.006 { 00:15:49.006 "ublk_id": 0, 00:15:49.006 "method": "ublk_stop_disk", 00:15:49.006 "req_id": 1 00:15:49.006 } 00:15:49.006 Got JSON-RPC error response 00:15:49.006 response: 00:15:49.006 { 00:15:49.006 "code": -19, 00:15:49.006 "message": "No such device" 00:15:49.006 } 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@653 -- # es=1 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:49.006 18:06:05 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.006 [2024-10-28 18:06:05.280971] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:49.006 [2024-10-28 18:06:05.287928] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:49.006 [2024-10-28 18:06:05.287975] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.006 18:06:05 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.006 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.572 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.572 18:06:05 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:15:49.572 18:06:05 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:49.572 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.572 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.572 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.572 18:06:05 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:49.572 18:06:05 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:15:49.572 18:06:05 
ublk.test_create_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:49.572 18:06:05 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:49.572 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.572 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.572 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.572 18:06:05 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:49.572 18:06:05 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:15:49.572 18:06:05 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:49.572 00:15:49.572 real 0m11.560s 00:15:49.572 user 0m0.747s 00:15:49.572 sys 0m1.038s 00:15:49.572 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:49.572 18:06:05 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.572 ************************************ 00:15:49.572 END TEST test_create_ublk 00:15:49.572 ************************************ 00:15:49.572 18:06:06 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:15:49.572 18:06:06 ublk -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:49.572 18:06:06 ublk -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:49.572 18:06:06 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.572 ************************************ 00:15:49.572 START TEST test_create_multi_ublk 00:15:49.572 ************************************ 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1127 -- # test_create_multi_ublk 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:49.572 [2024-10-28 18:06:06.025933] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:49.572 [2024-10-28 18:06:06.028327] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.572 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.138 [2024-10-28 18:06:06.328101] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 
num_queues 4 queue_depth 512 00:15:50.138 [2024-10-28 18:06:06.328634] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:15:50.138 [2024-10-28 18:06:06.328659] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:15:50.138 [2024-10-28 18:06:06.328675] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:15:50.138 [2024-10-28 18:06:06.336008] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:50.138 [2024-10-28 18:06:06.336043] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:50.138 [2024-10-28 18:06:06.343927] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:50.138 [2024-10-28 18:06:06.344742] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:15:50.138 [2024-10-28 18:06:06.353546] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.138 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.138 [2024-10-28 18:06:06.590098] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:15:50.138 [2024-10-28 18:06:06.590653] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:15:50.138 [2024-10-28 18:06:06.590692] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:50.138 [2024-10-28 18:06:06.590703] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:50.138 [2024-10-28 18:06:06.597968] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:50.138 [2024-10-28 18:06:06.597998] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:50.138 [2024-10-28 18:06:06.605923] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:50.138 [2024-10-28 18:06:06.606699] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:50.396 [2024-10-28 18:06:06.622930] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:50.396 
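test_create_multi_ublk repeats that bring-up in a loop, pairing Malloc$i with /dev/ublkb$i: the records above cover devices 0 and 1, and devices 2 and 3 follow. A hedged sketch of the loop (MAX_DEV_ID is 3 in this run, per the seq 0 3 traces):

    # Illustrative reconstruction of the multi-disk loop; names, sizes and
    # queue parameters are the ones shown in the trace.
    for i in $(seq 0 3); do
        scripts/rpc.py bdev_malloc_create -b "Malloc$i" 128 4096
        scripts/rpc.py ublk_start_disk "Malloc$i" "$i" -q 4 -d 512
    done

Note the negative check that just ran in test_create_ublk: once /dev/ublkb0 was stopped, a second ublk_stop_disk 0 returned JSON-RPC error -19 ("No such device"), which the NOT wrapper treats as the expected outcome.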
18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.396 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.396 [2024-10-28 18:06:06.852040] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:15:50.396 [2024-10-28 18:06:06.852551] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:15:50.396 [2024-10-28 18:06:06.852586] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:15:50.396 [2024-10-28 18:06:06.852606] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:15:50.396 [2024-10-28 18:06:06.861133] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:50.396 [2024-10-28 18:06:06.861169] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:50.396 [2024-10-28 18:06:06.871022] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:50.396 [2024-10-28 18:06:06.871775] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:15:50.654 [2024-10-28 18:06:06.887943] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:15:50.654 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.654 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:15:50.654 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:50.654 18:06:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:15:50.654 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.654 18:06:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.654 18:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.654 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:15:50.654 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:15:50.654 18:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.654 18:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.654 [2024-10-28 18:06:07.120117] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:15:50.654 [2024-10-28 18:06:07.120632] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:15:50.654 [2024-10-28 18:06:07.120660] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:15:50.654 [2024-10-28 18:06:07.120671] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:15:50.654 
[2024-10-28 18:06:07.129271] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:50.654 [2024-10-28 18:06:07.129300] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:50.912 [2024-10-28 18:06:07.134984] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:50.912 [2024-10-28 18:06:07.135739] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:15:50.912 [2024-10-28 18:06:07.141097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:15:50.912 { 00:15:50.912 "ublk_device": "/dev/ublkb0", 00:15:50.912 "id": 0, 00:15:50.912 "queue_depth": 512, 00:15:50.912 "num_queues": 4, 00:15:50.912 "bdev_name": "Malloc0" 00:15:50.912 }, 00:15:50.912 { 00:15:50.912 "ublk_device": "/dev/ublkb1", 00:15:50.912 "id": 1, 00:15:50.912 "queue_depth": 512, 00:15:50.912 "num_queues": 4, 00:15:50.912 "bdev_name": "Malloc1" 00:15:50.912 }, 00:15:50.912 { 00:15:50.912 "ublk_device": "/dev/ublkb2", 00:15:50.912 "id": 2, 00:15:50.912 "queue_depth": 512, 00:15:50.912 "num_queues": 4, 00:15:50.912 "bdev_name": "Malloc2" 00:15:50.912 }, 00:15:50.912 { 00:15:50.912 "ublk_device": "/dev/ublkb3", 00:15:50.912 "id": 3, 00:15:50.912 "queue_depth": 512, 00:15:50.912 "num_queues": 4, 00:15:50.912 "bdev_name": "Malloc3" 00:15:50.912 } 00:15:50.912 ]' 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:50.912 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = 
\/\d\e\v\/\u\b\l\k\b\1 ]] 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:51.170 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:51.428 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:15:51.687 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:15:51.687 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.687 18:06:07 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:15:51.687 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:15:51.687 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:15:51.687 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:15:51.687 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:15:51.687 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:15:51.687 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:15:51.687 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:15:51.687 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.945 [2024-10-28 18:06:08.217229] ublk.c: 
469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:15:51.945 [2024-10-28 18:06:08.251247] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:51.945 [2024-10-28 18:06:08.252671] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:15:51.945 [2024-10-28 18:06:08.258007] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:51.945 [2024-10-28 18:06:08.258426] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:15:51.945 [2024-10-28 18:06:08.258451] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.945 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.946 [2024-10-28 18:06:08.266035] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:15:51.946 [2024-10-28 18:06:08.299314] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:51.946 [2024-10-28 18:06:08.300474] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:15:51.946 [2024-10-28 18:06:08.313968] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:51.946 [2024-10-28 18:06:08.314322] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:15:51.946 [2024-10-28 18:06:08.314343] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:51.946 [2024-10-28 18:06:08.318078] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:15:51.946 [2024-10-28 18:06:08.353947] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:51.946 [2024-10-28 18:06:08.354940] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:15:51.946 [2024-10-28 18:06:08.361970] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:51.946 [2024-10-28 18:06:08.362301] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:15:51.946 [2024-10-28 18:06:08.362325] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- 
common/autotest_common.sh@10 -- # set +x 00:15:51.946 [2024-10-28 18:06:08.370057] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:15:51.946 [2024-10-28 18:06:08.400291] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:15:51.946 [2024-10-28 18:06:08.401409] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:15:51.946 [2024-10-28 18:06:08.409973] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:15:51.946 [2024-10-28 18:06:08.410362] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:15:51.946 [2024-10-28 18:06:08.410386] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.946 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:15:52.512 [2024-10-28 18:06:08.698044] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:52.512 [2024-10-28 18:06:08.705947] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:52.512 [2024-10-28 18:06:08.705993] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:15:52.512 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:15:52.512 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:52.512 18:06:08 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:15:52.512 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.512 18:06:08 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.079 18:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.079 18:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.079 18:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:15:53.079 18:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.079 18:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.337 18:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.337 18:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.337 18:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:15:53.337 18:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.337 18:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.594 18:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.594 18:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:15:53.594 18:06:09 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:15:53.594 18:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.594 18:06:09 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:15:53.853 18:06:10 
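Teardown mirrors bring-up: each of the four disks is stopped (the STOP_DEV/DEL_DEV pairs above), the ublk target is destroyed with an extended 120-second RPC timeout, and the malloc bdevs are deleted before the leftover-device check. Condensed, with the timeout flag taken from the trace:

    # Hedged sketch of the cleanup path; -t raises rpc.py's response timeout,
    # giving the kernel time to quiesce the ublk devices before the target exits.
    for i in $(seq 0 3); do scripts/rpc.py ublk_stop_disk "$i"; done
    scripts/rpc.py -t 120 ublk_destroy_target
    for i in $(seq 0 3); do scripts/rpc.py bdev_malloc_delete "Malloc$i"; done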
ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:15:53.853 ************************************ 00:15:53.853 END TEST test_create_multi_ublk 00:15:53.853 ************************************ 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:15:53.853 00:15:53.853 real 0m4.266s 00:15:53.853 user 0m1.348s 00:15:53.853 sys 0m0.159s 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:53.853 18:06:10 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:15:53.853 18:06:10 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:15:53.853 18:06:10 ublk -- ublk/ublk.sh@147 -- # cleanup 00:15:53.853 18:06:10 ublk -- ublk/ublk.sh@130 -- # killprocess 72478 00:15:53.853 18:06:10 ublk -- common/autotest_common.sh@952 -- # '[' -z 72478 ']' 00:15:53.853 18:06:10 ublk -- common/autotest_common.sh@956 -- # kill -0 72478 00:15:53.853 18:06:10 ublk -- common/autotest_common.sh@957 -- # uname 00:15:53.853 18:06:10 ublk -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:54.111 18:06:10 ublk -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 72478 00:15:54.111 killing process with pid 72478 00:15:54.111 18:06:10 ublk -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:54.111 18:06:10 ublk -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:54.111 18:06:10 ublk -- common/autotest_common.sh@970 -- # echo 'killing process with pid 72478' 00:15:54.111 18:06:10 ublk -- common/autotest_common.sh@971 -- # kill 72478 00:15:54.111 18:06:10 ublk -- common/autotest_common.sh@976 -- # wait 72478 00:15:54.731 [2024-10-28 18:06:11.195134] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:15:54.731 [2024-10-28 18:06:11.195212] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:15:56.107 00:15:56.107 real 0m27.732s 00:15:56.107 user 0m40.444s 00:15:56.107 sys 0m10.089s 00:15:56.107 18:06:12 ublk -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:56.107 ************************************ 00:15:56.107 END TEST ublk 00:15:56.107 18:06:12 ublk -- common/autotest_common.sh@10 -- # set +x 00:15:56.107 ************************************ 00:15:56.107 18:06:12 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:56.107 
18:06:12 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:15:56.107 18:06:12 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:56.107 18:06:12 -- common/autotest_common.sh@10 -- # set +x 00:15:56.107 ************************************ 00:15:56.107 START TEST ublk_recovery 00:15:56.107 ************************************ 00:15:56.107 18:06:12 ublk_recovery -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:15:56.107 * Looking for test storage... 00:15:56.107 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:15:56.107 18:06:12 ublk_recovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:56.107 18:06:12 ublk_recovery -- common/autotest_common.sh@1691 -- # lcov --version 00:15:56.107 18:06:12 ublk_recovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:56.107 18:06:12 ublk_recovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:56.107 18:06:12 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:15:56.107 18:06:12 ublk_recovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:56.107 18:06:12 ublk_recovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:56.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.107 --rc genhtml_branch_coverage=1 00:15:56.107 --rc genhtml_function_coverage=1 00:15:56.107 --rc genhtml_legend=1 00:15:56.107 --rc geninfo_all_blocks=1 00:15:56.107 --rc geninfo_unexecuted_blocks=1 00:15:56.107 00:15:56.107 ' 00:15:56.107 18:06:12 ublk_recovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:56.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.107 --rc genhtml_branch_coverage=1 00:15:56.107 --rc genhtml_function_coverage=1 00:15:56.108 --rc genhtml_legend=1 00:15:56.108 --rc geninfo_all_blocks=1 00:15:56.108 --rc geninfo_unexecuted_blocks=1 00:15:56.108 00:15:56.108 ' 00:15:56.108 18:06:12 ublk_recovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:56.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.108 --rc genhtml_branch_coverage=1 00:15:56.108 --rc genhtml_function_coverage=1 00:15:56.108 --rc genhtml_legend=1 00:15:56.108 --rc geninfo_all_blocks=1 00:15:56.108 --rc geninfo_unexecuted_blocks=1 00:15:56.108 00:15:56.108 ' 00:15:56.108 18:06:12 ublk_recovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:56.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:56.108 --rc genhtml_branch_coverage=1 00:15:56.108 --rc genhtml_function_coverage=1 00:15:56.108 --rc genhtml_legend=1 00:15:56.108 --rc geninfo_all_blocks=1 00:15:56.108 --rc geninfo_unexecuted_blocks=1 00:15:56.108 00:15:56.108 ' 00:15:56.108 18:06:12 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:15:56.108 18:06:12 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:15:56.108 18:06:12 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:15:56.108 18:06:12 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:15:56.108 18:06:12 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:15:56.108 18:06:12 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:15:56.108 18:06:12 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:15:56.108 18:06:12 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:15:56.108 18:06:12 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:15:56.108 18:06:12 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:15:56.108 18:06:12 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=72892 00:15:56.108 18:06:12 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:15:56.108 18:06:12 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:56.108 18:06:12 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 72892 00:15:56.108 18:06:12 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 72892 ']' 00:15:56.108 18:06:12 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:56.108 18:06:12 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:56.108 18:06:12 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:56.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:56.108 18:06:12 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:56.108 18:06:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:56.108 [2024-10-28 18:06:12.509612] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:15:56.108 [2024-10-28 18:06:12.510024] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72892 ] 00:15:56.367 [2024-10-28 18:06:12.678112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:15:56.367 [2024-10-28 18:06:12.769706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.367 [2024-10-28 18:06:12.769705] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:57.301 18:06:13 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:57.301 18:06:13 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:15:57.301 18:06:13 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:15:57.301 18:06:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.301 18:06:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.301 [2024-10-28 18:06:13.487927] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:15:57.301 [2024-10-28 18:06:13.490374] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:15:57.301 18:06:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.302 18:06:13 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:15:57.302 18:06:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.302 18:06:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.302 malloc0 00:15:57.302 18:06:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.302 18:06:13 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:15:57.302 18:06:13 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.302 18:06:13 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:15:57.302 [2024-10-28 18:06:13.608071] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:15:57.302 [2024-10-28 18:06:13.608222] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:15:57.302 [2024-10-28 18:06:13.608240] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:15:57.302 [2024-10-28 18:06:13.608252] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:15:57.302 [2024-10-28 18:06:13.617096] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:15:57.302 [2024-10-28 18:06:13.617136] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:15:57.302 [2024-10-28 18:06:13.624033] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:15:57.302 [2024-10-28 18:06:13.624247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:15:57.302 [2024-10-28 18:06:13.634921] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:15:57.302 1 00:15:57.302 18:06:13 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.302 18:06:13 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:15:58.236 18:06:14 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=72927 00:15:58.236 18:06:14 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:15:58.236 18:06:14 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:15:58.494 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:58.494 fio-3.35 00:15:58.494 Starting 1 process 00:16:03.759 18:06:19 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 72892 00:16:03.759 18:06:19 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:09.030 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 72892 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:09.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.030 18:06:24 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=73033 00:16:09.030 18:06:24 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:09.030 18:06:24 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:09.030 18:06:24 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 73033 00:16:09.030 18:06:24 ublk_recovery -- common/autotest_common.sh@833 -- # '[' -z 73033 ']' 00:16:09.030 18:06:24 ublk_recovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.030 18:06:24 ublk_recovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:09.030 18:06:24 ublk_recovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.030 18:06:24 ublk_recovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:09.030 18:06:24 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.030 [2024-10-28 18:06:24.763253] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
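For orientation: the crash-and-recover sequence this stretch of the log records condenses to a handful of commands. The sketch below mirrors the rpc.py calls and the fio invocation visible in the xtrace; it is an illustration only, not the ublk_recovery.sh source (the real script replaces the sleeps with waitforlisten and wraps every call in error checks):

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC=$SPDK/scripts/rpc.py

    # First target: expose a malloc bdev as /dev/ublkb1 and load it with fio.
    $SPDK/build/bin/spdk_tgt -m 0x3 -L ublk & tgt=$!
    sleep 1                                        # sketch only; the real script waits for the RPC socket
    $RPC ublk_create_target
    $RPC bdev_malloc_create -b malloc0 64 4096
    $RPC ublk_start_disk malloc0 1 -q 2 -d 128
    taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 \
        --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 \
        --time_based --runtime=60 & fio=$!

    # Hard-kill the target mid-I/O, then recover the disk from a fresh target.
    kill -9 "$tgt"
    $SPDK/build/bin/spdk_tgt -m 0x3 -L ublk & tgt=$!
    sleep 1
    $RPC ublk_create_target
    $RPC bdev_malloc_create -b malloc0 64 4096
    $RPC ublk_recover_disk malloc0 1               # drives the UBLK_CMD_*_USER_RECOVERY commands traced below
    wait "$fio"                                    # fio rides out the kill and runs to completion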
00:16:09.030 [2024-10-28 18:06:24.763671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73033 ] 00:16:09.030 [2024-10-28 18:06:24.940503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:09.030 [2024-10-28 18:06:25.071533] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.030 [2024-10-28 18:06:25.071545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@866 -- # return 0 00:16:09.597 18:06:25 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.597 [2024-10-28 18:06:25.799961] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:09.597 [2024-10-28 18:06:25.802626] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.597 18:06:25 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.597 malloc0 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.597 18:06:25 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:09.597 [2024-10-28 18:06:25.917066] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:09.597 [2024-10-28 18:06:25.917129] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:09.597 [2024-10-28 18:06:25.917151] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:09.597 [2024-10-28 18:06:25.923907] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:09.597 [2024-10-28 18:06:25.923940] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 2 00:16:09.597 [2024-10-28 18:06:25.923952] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:09.597 [2024-10-28 18:06:25.924042] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:16:09.597 1 00:16:09.597 18:06:25 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.597 18:06:25 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 72927 00:16:09.597 [2024-10-28 18:06:25.928875] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:16:09.597 [2024-10-28 18:06:25.936542] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:16:09.597 [2024-10-28 18:06:25.943899] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:16:09.597 [2024-10-28 
18:06:25.943932] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:05.817 00:17:05.817 fio_test: (groupid=0, jobs=1): err= 0: pid=72930: Mon Oct 28 18:07:14 2024 00:17:05.817 read: IOPS=18.3k, BW=71.4MiB/s (74.9MB/s)(4283MiB/60002msec) 00:17:05.817 slat (nsec): min=1952, max=2289.3k, avg=6597.16, stdev=4275.22 00:17:05.817 clat (usec): min=1243, max=6305.7k, avg=3467.64, stdev=49976.95 00:17:05.817 lat (usec): min=1252, max=6305.7k, avg=3474.24, stdev=49976.95 00:17:05.817 clat percentiles (usec): 00:17:05.817 | 1.00th=[ 2474], 5.00th=[ 2638], 10.00th=[ 2704], 20.00th=[ 2769], 00:17:05.817 | 30.00th=[ 2868], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 3032], 00:17:05.817 | 70.00th=[ 3097], 80.00th=[ 3163], 90.00th=[ 3359], 95.00th=[ 3949], 00:17:05.817 | 99.00th=[ 5604], 99.50th=[ 6652], 99.90th=[ 7701], 99.95th=[ 8848], 00:17:05.817 | 99.99th=[13829] 00:17:05.817 bw ( KiB/s): min=33464, max=89560, per=100.00%, avg=81317.28, stdev=7849.50, samples=107 00:17:05.817 iops : min= 8366, max=22390, avg=20329.31, stdev=1962.37, samples=107 00:17:05.817 write: IOPS=18.3k, BW=71.4MiB/s (74.8MB/s)(4281MiB/60002msec); 0 zone resets 00:17:05.817 slat (usec): min=2, max=1598, avg= 6.93, stdev= 3.87 00:17:05.817 clat (usec): min=1141, max=6305.8k, avg=3522.60, stdev=46221.75 00:17:05.817 lat (usec): min=1343, max=6305.8k, avg=3529.53, stdev=46221.74 00:17:05.817 clat percentiles (usec): 00:17:05.817 | 1.00th=[ 2540], 5.00th=[ 2737], 10.00th=[ 2802], 20.00th=[ 2900], 00:17:05.817 | 30.00th=[ 2999], 40.00th=[ 3064], 50.00th=[ 3097], 60.00th=[ 3163], 00:17:05.817 | 70.00th=[ 3195], 80.00th=[ 3294], 90.00th=[ 3458], 95.00th=[ 3884], 00:17:05.817 | 99.00th=[ 5604], 99.50th=[ 6718], 99.90th=[ 7767], 99.95th=[ 8586], 00:17:05.817 | 99.99th=[13960] 00:17:05.817 bw ( KiB/s): min=33624, max=88712, per=100.00%, avg=81276.95, stdev=7818.91, samples=107 00:17:05.817 iops : min= 8406, max=22178, avg=20319.22, stdev=1954.72, samples=107 00:17:05.817 lat (msec) : 2=0.08%, 4=95.20%, 10=4.67%, 20=0.04%, >=2000=0.01% 00:17:05.817 cpu : usr=10.45%, sys=23.23%, ctx=67632, majf=0, minf=13 00:17:05.817 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:05.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:05.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:05.817 issued rwts: total=1096497,1096052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:05.817 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:05.817 00:17:05.817 Run status group 0 (all jobs): 00:17:05.817 READ: bw=71.4MiB/s (74.9MB/s), 71.4MiB/s-71.4MiB/s (74.9MB/s-74.9MB/s), io=4283MiB (4491MB), run=60002-60002msec 00:17:05.817 WRITE: bw=71.4MiB/s (74.8MB/s), 71.4MiB/s-71.4MiB/s (74.8MB/s-74.8MB/s), io=4281MiB (4489MB), run=60002-60002msec 00:17:05.817 00:17:05.817 Disk stats (read/write): 00:17:05.817 ublkb1: ios=1094267/1093783, merge=0/0, ticks=3691891/3621316, in_queue=7313208, util=99.94% 00:17:05.817 18:07:14 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:05.817 18:07:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.817 18:07:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.817 [2024-10-28 18:07:14.920964] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:05.817 [2024-10-28 18:07:14.965097] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:05.817 [2024-10-28 
18:07:14.965484] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:05.817 [2024-10-28 18:07:14.972963] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:05.817 [2024-10-28 18:07:14.973124] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:05.817 [2024-10-28 18:07:14.973149] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:05.817 18:07:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.817 18:07:14 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:05.817 18:07:14 ublk_recovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.817 18:07:14 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.817 [2024-10-28 18:07:14.987077] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:05.817 [2024-10-28 18:07:14.994950] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:05.817 [2024-10-28 18:07:14.994998] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:05.817 18:07:14 ublk_recovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.817 18:07:14 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:05.817 18:07:14 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:05.817 18:07:14 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 73033 00:17:05.817 18:07:14 ublk_recovery -- common/autotest_common.sh@952 -- # '[' -z 73033 ']' 00:17:05.817 18:07:14 ublk_recovery -- common/autotest_common.sh@956 -- # kill -0 73033 00:17:05.817 18:07:14 ublk_recovery -- common/autotest_common.sh@957 -- # uname 00:17:05.817 18:07:15 ublk_recovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:05.817 18:07:15 ublk_recovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73033 00:17:05.817 killing process with pid 73033 00:17:05.817 18:07:15 ublk_recovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:05.817 18:07:15 ublk_recovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:05.817 18:07:15 ublk_recovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73033' 00:17:05.817 18:07:15 ublk_recovery -- common/autotest_common.sh@971 -- # kill 73033 00:17:05.817 18:07:15 ublk_recovery -- common/autotest_common.sh@976 -- # wait 73033 00:17:05.817 [2024-10-28 18:07:16.470562] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:05.817 [2024-10-28 18:07:16.470623] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:05.817 00:17:05.817 real 1m5.448s 00:17:05.817 user 1m47.380s 00:17:05.817 sys 0m32.860s 00:17:05.817 18:07:17 ublk_recovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:05.817 ************************************ 00:17:05.817 END TEST ublk_recovery 00:17:05.817 ************************************ 00:17:05.817 18:07:17 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:05.817 18:07:17 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@256 -- # timing_exit lib 00:17:05.817 18:07:17 -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:05.817 18:07:17 -- common/autotest_common.sh@10 -- # set +x 00:17:05.817 18:07:17 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- 
spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@338 -- # '[' 1 -eq 1 ']' 00:17:05.817 18:07:17 -- spdk/autotest.sh@339 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:05.817 18:07:17 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:17:05.817 18:07:17 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:05.817 18:07:17 -- common/autotest_common.sh@10 -- # set +x 00:17:05.817 ************************************ 00:17:05.817 START TEST ftl 00:17:05.817 ************************************ 00:17:05.817 18:07:17 ftl -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:05.817 * Looking for test storage... 00:17:05.817 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:05.817 18:07:17 ftl -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:05.817 18:07:17 ftl -- common/autotest_common.sh@1691 -- # lcov --version 00:17:05.817 18:07:17 ftl -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:05.818 18:07:17 ftl -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:05.818 18:07:17 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:05.818 18:07:17 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:05.818 18:07:17 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:05.818 18:07:17 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:05.818 18:07:17 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:05.818 18:07:17 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:05.818 18:07:17 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:05.818 18:07:17 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:05.818 18:07:17 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:05.818 18:07:17 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:05.818 18:07:17 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:05.818 18:07:17 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:05.818 18:07:17 ftl -- scripts/common.sh@345 -- # : 1 00:17:05.818 18:07:17 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:05.818 18:07:17 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:05.818 18:07:17 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:05.818 18:07:17 ftl -- scripts/common.sh@353 -- # local d=1 00:17:05.818 18:07:17 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:05.818 18:07:17 ftl -- scripts/common.sh@355 -- # echo 1 00:17:05.818 18:07:17 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:05.818 18:07:17 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:05.818 18:07:17 ftl -- scripts/common.sh@353 -- # local d=2 00:17:05.818 18:07:17 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:05.818 18:07:17 ftl -- scripts/common.sh@355 -- # echo 2 00:17:05.818 18:07:17 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:05.818 18:07:17 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:05.818 18:07:17 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:05.818 18:07:17 ftl -- scripts/common.sh@368 -- # return 0 00:17:05.818 18:07:17 ftl -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:05.818 18:07:17 ftl -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:05.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.818 --rc genhtml_branch_coverage=1 00:17:05.818 --rc genhtml_function_coverage=1 00:17:05.818 --rc genhtml_legend=1 00:17:05.818 --rc geninfo_all_blocks=1 00:17:05.818 --rc geninfo_unexecuted_blocks=1 00:17:05.818 00:17:05.818 ' 00:17:05.818 18:07:17 ftl -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:05.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.818 --rc genhtml_branch_coverage=1 00:17:05.818 --rc genhtml_function_coverage=1 00:17:05.818 --rc genhtml_legend=1 00:17:05.818 --rc geninfo_all_blocks=1 00:17:05.818 --rc geninfo_unexecuted_blocks=1 00:17:05.818 00:17:05.818 ' 00:17:05.818 18:07:17 ftl -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:05.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.818 --rc genhtml_branch_coverage=1 00:17:05.818 --rc genhtml_function_coverage=1 00:17:05.818 --rc genhtml_legend=1 00:17:05.818 --rc geninfo_all_blocks=1 00:17:05.818 --rc geninfo_unexecuted_blocks=1 00:17:05.818 00:17:05.818 ' 00:17:05.818 18:07:17 ftl -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:05.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:05.818 --rc genhtml_branch_coverage=1 00:17:05.818 --rc genhtml_function_coverage=1 00:17:05.818 --rc genhtml_legend=1 00:17:05.818 --rc geninfo_all_blocks=1 00:17:05.818 --rc geninfo_unexecuted_blocks=1 00:17:05.818 00:17:05.818 ' 00:17:05.818 18:07:17 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:05.818 18:07:17 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:05.818 18:07:17 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:05.818 18:07:17 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:05.818 18:07:17 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
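For readers following the xtrace: the scripts/common.sh trace above (lt 1.15 2, expanding into cmp_versions) is the dotted-version comparison the harness uses to decide which lcov option names to emit. Trimmed to its core idea, and assuming purely numeric version fields, it amounts to:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {                               # sketch; the real helper also copes with non-numeric fields
        local IFS=.-: op=$2
        local -a ver1 ver2
        read -ra ver1 <<<"$1"
        read -ra ver2 <<<"$3"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}  # missing fields compare as 0
            (( a > b )) && { [[ $op == *'>'* ]]; return; }
            (( a < b )) && { [[ $op == *'<'* ]]; return; }
        done
        [[ $op == *=* ]]                           # equal versions satisfy only <=, >=, ==
    }

    lt 1.15 2 && echo "lcov older than 2.x: emit the legacy --rc lcov_* option names"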
00:17:05.818 18:07:17 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:05.818 18:07:17 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.818 18:07:17 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:05.818 18:07:17 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:05.818 18:07:17 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:05.818 18:07:17 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:05.818 18:07:17 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:05.818 18:07:17 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:05.818 18:07:17 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:05.818 18:07:17 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:05.818 18:07:17 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:05.818 18:07:17 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:05.818 18:07:17 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:05.818 18:07:17 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:05.818 18:07:17 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:05.818 18:07:17 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:05.818 18:07:17 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:05.818 18:07:17 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:05.818 18:07:17 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:05.818 18:07:17 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:05.818 18:07:17 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:05.818 18:07:17 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:05.818 18:07:17 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:05.818 18:07:17 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:05.818 18:07:17 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:05.818 18:07:17 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:05.818 18:07:17 ftl -- ftl/ftl.sh@34 -- # PCI_ALLOWED= 00:17:05.818 18:07:17 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:05.818 18:07:17 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:05.818 18:07:17 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:05.818 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:05.818 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:05.818 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:05.818 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:05.818 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:05.818 18:07:18 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=73842 00:17:05.818 18:07:18 ftl -- ftl/ftl.sh@38 -- # waitforlisten 73842 00:17:05.818 18:07:18 ftl -- common/autotest_common.sh@833 -- # '[' -z 73842 ']' 00:17:05.818 18:07:18 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:05.818 18:07:18 ftl -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:17:05.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:05.818 18:07:18 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:05.818 18:07:18 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:05.818 18:07:18 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:05.818 18:07:18 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:05.818 [2024-10-28 18:07:18.583541] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:17:05.818 [2024-10-28 18:07:18.583700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73842 ] 00:17:05.818 [2024-10-28 18:07:18.760524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.818 [2024-10-28 18:07:18.885528] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:05.818 18:07:19 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:05.818 18:07:19 ftl -- common/autotest_common.sh@866 -- # return 0 00:17:05.818 18:07:19 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:05.818 18:07:19 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:05.818 18:07:20 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:05.818 18:07:20 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@50 -- # break 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@59 -- # base_size=1310720 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:05.818 18:07:21 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:05.818 18:07:22 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:05.818 18:07:22 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:05.818 18:07:22 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:05.818 18:07:22 ftl -- ftl/ftl.sh@63 -- # break 00:17:05.818 18:07:22 ftl -- ftl/ftl.sh@66 -- # killprocess 73842 00:17:05.818 18:07:22 ftl -- common/autotest_common.sh@952 -- # '[' -z 73842 ']' 00:17:05.818 18:07:22 ftl -- common/autotest_common.sh@956 -- # kill -0 73842 00:17:05.818 18:07:22 ftl -- common/autotest_common.sh@957 -- # uname 00:17:05.818 18:07:22 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:05.818 18:07:22 ftl -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73842 00:17:05.818 killing process with pid 73842 00:17:05.818 18:07:22 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:05.818 18:07:22 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:05.818 18:07:22 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73842' 00:17:05.818 18:07:22 ftl -- common/autotest_common.sh@971 -- # kill 73842 00:17:05.818 18:07:22 ftl -- common/autotest_common.sh@976 -- # wait 73842 00:17:08.348 18:07:24 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:08.348 18:07:24 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:08.348 18:07:24 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:17:08.348 18:07:24 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:08.348 18:07:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:08.348 ************************************ 00:17:08.348 START TEST ftl_fio_basic 00:17:08.348 ************************************ 00:17:08.348 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:08.349 * Looking for test storage... 00:17:08.349 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lcov --version 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:08.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.349 --rc genhtml_branch_coverage=1 00:17:08.349 --rc genhtml_function_coverage=1 00:17:08.349 --rc genhtml_legend=1 00:17:08.349 --rc geninfo_all_blocks=1 00:17:08.349 --rc geninfo_unexecuted_blocks=1 00:17:08.349 00:17:08.349 ' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:08.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.349 --rc genhtml_branch_coverage=1 00:17:08.349 --rc genhtml_function_coverage=1 00:17:08.349 --rc genhtml_legend=1 00:17:08.349 --rc geninfo_all_blocks=1 00:17:08.349 --rc geninfo_unexecuted_blocks=1 00:17:08.349 00:17:08.349 ' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:08.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.349 --rc genhtml_branch_coverage=1 00:17:08.349 --rc genhtml_function_coverage=1 00:17:08.349 --rc genhtml_legend=1 00:17:08.349 --rc geninfo_all_blocks=1 00:17:08.349 --rc geninfo_unexecuted_blocks=1 00:17:08.349 00:17:08.349 ' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:08.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.349 --rc genhtml_branch_coverage=1 00:17:08.349 --rc genhtml_function_coverage=1 00:17:08.349 --rc genhtml_legend=1 00:17:08.349 --rc geninfo_all_blocks=1 00:17:08.349 --rc geninfo_unexecuted_blocks=1 00:17:08.349 00:17:08.349 ' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 
randw-verify-depth128' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=73991 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 73991 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@833 -- # '[' -z 73991 ']' 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:08.349 18:07:24 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:08.349 [2024-10-28 18:07:24.539608] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
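Context for the device arguments above: before handing off to fio.sh, ftl.sh picked the two PCI addresses it passed along (device=0000:00:11.0, cache_device=0000:00:10.0) with the jq filters traced at ftl/ftl.sh@47 and @60 — a non-zoned NVMe bdev with 64-byte metadata and at least 1310720 blocks becomes the nv-cache, and any other large-enough non-zoned bdev becomes the base device. Spelled out as standalone commands (a sketch of those two steps, with the cache address hard-coded the way this run resolved it):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # nv-cache candidates: 64-byte metadata, non-zoned, >= 1310720 blocks
    cache_disks=$($RPC bdev_get_bdevs | jq -r '.[]
        | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720)
        | .driver_specific.nvme[].pci_address')    # -> 0000:00:10.0 in this run

    # base candidates: everything else that is non-zoned and large enough
    base_disks=$($RPC bdev_get_bdevs | jq -r '.[]
        | select(.driver_specific.nvme[0].pci_address != "0000:00:10.0"
                 and .zoned == false and .num_blocks >= 1310720)
        | .driver_specific.nvme[].pci_address')    # -> 0000:00:11.0 in this run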
00:17:08.349 [2024-10-28 18:07:24.539949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73991 ] 00:17:08.349 [2024-10-28 18:07:24.716574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.349 [2024-10-28 18:07:24.824061] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.349 [2024-10-28 18:07:24.824176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.349 [2024-10-28 18:07:24.824180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:09.284 18:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:09.284 18:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@866 -- # return 0 00:17:09.284 18:07:25 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:09.284 18:07:25 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:09.284 18:07:25 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:09.284 18:07:25 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:09.284 18:07:25 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:09.284 18:07:25 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:09.542 18:07:25 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:09.542 18:07:25 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:09.542 18:07:25 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:09.542 18:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:17:09.542 18:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:09.542 18:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:17:09.542 18:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:17:09.542 18:07:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:09.800 18:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:09.800 { 00:17:09.800 "name": "nvme0n1", 00:17:09.800 "aliases": [ 00:17:09.800 "8cd1ebc3-562a-4140-b5f3-5fd41d8a8bb5" 00:17:09.800 ], 00:17:09.800 "product_name": "NVMe disk", 00:17:09.800 "block_size": 4096, 00:17:09.800 "num_blocks": 1310720, 00:17:09.800 "uuid": "8cd1ebc3-562a-4140-b5f3-5fd41d8a8bb5", 00:17:09.800 "numa_id": -1, 00:17:09.800 "assigned_rate_limits": { 00:17:09.800 "rw_ios_per_sec": 0, 00:17:09.800 "rw_mbytes_per_sec": 0, 00:17:09.800 "r_mbytes_per_sec": 0, 00:17:09.800 "w_mbytes_per_sec": 0 00:17:09.800 }, 00:17:09.800 "claimed": false, 00:17:09.800 "zoned": false, 00:17:09.800 "supported_io_types": { 00:17:09.800 "read": true, 00:17:09.800 "write": true, 00:17:09.800 "unmap": true, 00:17:09.800 "flush": true, 00:17:09.800 "reset": true, 00:17:09.800 "nvme_admin": true, 00:17:09.800 "nvme_io": true, 00:17:09.800 "nvme_io_md": false, 00:17:09.800 "write_zeroes": true, 00:17:09.800 "zcopy": false, 00:17:09.800 "get_zone_info": false, 00:17:09.800 "zone_management": false, 00:17:09.800 "zone_append": false, 00:17:09.800 "compare": true, 00:17:09.800 "compare_and_write": false, 00:17:09.800 "abort": true, 00:17:09.800 
"seek_hole": false, 00:17:09.800 "seek_data": false, 00:17:09.800 "copy": true, 00:17:09.800 "nvme_iov_md": false 00:17:09.800 }, 00:17:09.800 "driver_specific": { 00:17:09.800 "nvme": [ 00:17:09.800 { 00:17:09.800 "pci_address": "0000:00:11.0", 00:17:09.800 "trid": { 00:17:09.800 "trtype": "PCIe", 00:17:09.800 "traddr": "0000:00:11.0" 00:17:09.800 }, 00:17:09.800 "ctrlr_data": { 00:17:09.800 "cntlid": 0, 00:17:09.800 "vendor_id": "0x1b36", 00:17:09.800 "model_number": "QEMU NVMe Ctrl", 00:17:09.800 "serial_number": "12341", 00:17:09.800 "firmware_revision": "8.0.0", 00:17:09.800 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:09.800 "oacs": { 00:17:09.800 "security": 0, 00:17:09.800 "format": 1, 00:17:09.800 "firmware": 0, 00:17:09.800 "ns_manage": 1 00:17:09.800 }, 00:17:09.800 "multi_ctrlr": false, 00:17:09.800 "ana_reporting": false 00:17:09.800 }, 00:17:09.800 "vs": { 00:17:09.800 "nvme_version": "1.4" 00:17:09.800 }, 00:17:09.800 "ns_data": { 00:17:09.800 "id": 1, 00:17:09.800 "can_share": false 00:17:09.800 } 00:17:09.800 } 00:17:09.800 ], 00:17:09.800 "mp_policy": "active_passive" 00:17:09.800 } 00:17:09.800 } 00:17:09.800 ]' 00:17:09.800 18:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:09.800 18:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:17:09.800 18:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:09.800 18:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=1310720 00:17:09.800 18:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:17:09.800 18:07:26 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 5120 00:17:10.058 18:07:26 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:10.058 18:07:26 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:10.058 18:07:26 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:10.058 18:07:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:10.058 18:07:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:10.315 18:07:26 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:10.315 18:07:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:10.573 18:07:26 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=391979f1-47aa-4747-9e0f-84b063e0af9e 00:17:10.573 18:07:26 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 391979f1-47aa-4747-9e0f-84b063e0af9e 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=a3edf63a-bf92-471f-9267-aeacd993433f 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 a3edf63a-bf92-471f-9267-aeacd993433f 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=a3edf63a-bf92-471f-9267-aeacd993433f 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size a3edf63a-bf92-471f-9267-aeacd993433f 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=a3edf63a-bf92-471f-9267-aeacd993433f 
00:17:10.830 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:17:10.830 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a3edf63a-bf92-471f-9267-aeacd993433f 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:11.088 { 00:17:11.088 "name": "a3edf63a-bf92-471f-9267-aeacd993433f", 00:17:11.088 "aliases": [ 00:17:11.088 "lvs/nvme0n1p0" 00:17:11.088 ], 00:17:11.088 "product_name": "Logical Volume", 00:17:11.088 "block_size": 4096, 00:17:11.088 "num_blocks": 26476544, 00:17:11.088 "uuid": "a3edf63a-bf92-471f-9267-aeacd993433f", 00:17:11.088 "assigned_rate_limits": { 00:17:11.088 "rw_ios_per_sec": 0, 00:17:11.088 "rw_mbytes_per_sec": 0, 00:17:11.088 "r_mbytes_per_sec": 0, 00:17:11.088 "w_mbytes_per_sec": 0 00:17:11.088 }, 00:17:11.088 "claimed": false, 00:17:11.088 "zoned": false, 00:17:11.088 "supported_io_types": { 00:17:11.088 "read": true, 00:17:11.088 "write": true, 00:17:11.088 "unmap": true, 00:17:11.088 "flush": false, 00:17:11.088 "reset": true, 00:17:11.088 "nvme_admin": false, 00:17:11.088 "nvme_io": false, 00:17:11.088 "nvme_io_md": false, 00:17:11.088 "write_zeroes": true, 00:17:11.088 "zcopy": false, 00:17:11.088 "get_zone_info": false, 00:17:11.088 "zone_management": false, 00:17:11.088 "zone_append": false, 00:17:11.088 "compare": false, 00:17:11.088 "compare_and_write": false, 00:17:11.088 "abort": false, 00:17:11.088 "seek_hole": true, 00:17:11.088 "seek_data": true, 00:17:11.088 "copy": false, 00:17:11.088 "nvme_iov_md": false 00:17:11.088 }, 00:17:11.088 "driver_specific": { 00:17:11.088 "lvol": { 00:17:11.088 "lvol_store_uuid": "391979f1-47aa-4747-9e0f-84b063e0af9e", 00:17:11.088 "base_bdev": "nvme0n1", 00:17:11.088 "thin_provision": true, 00:17:11.088 "num_allocated_clusters": 0, 00:17:11.088 "snapshot": false, 00:17:11.088 "clone": false, 00:17:11.088 "esnap_clone": false 00:17:11.088 } 00:17:11.088 } 00:17:11.088 } 00:17:11.088 ]' 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:11.088 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:11.654 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:11.654 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@47 -- # [[ -z '' ]] 00:17:11.654 18:07:27 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size a3edf63a-bf92-471f-9267-aeacd993433f 00:17:11.654 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local bdev_name=a3edf63a-bf92-471f-9267-aeacd993433f 00:17:11.654 18:07:27 
ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:11.654 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:17:11.654 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:17:11.654 18:07:27 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a3edf63a-bf92-471f-9267-aeacd993433f 00:17:11.654 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:11.654 { 00:17:11.654 "name": "a3edf63a-bf92-471f-9267-aeacd993433f", 00:17:11.654 "aliases": [ 00:17:11.654 "lvs/nvme0n1p0" 00:17:11.654 ], 00:17:11.654 "product_name": "Logical Volume", 00:17:11.654 "block_size": 4096, 00:17:11.654 "num_blocks": 26476544, 00:17:11.654 "uuid": "a3edf63a-bf92-471f-9267-aeacd993433f", 00:17:11.654 "assigned_rate_limits": { 00:17:11.654 "rw_ios_per_sec": 0, 00:17:11.654 "rw_mbytes_per_sec": 0, 00:17:11.654 "r_mbytes_per_sec": 0, 00:17:11.654 "w_mbytes_per_sec": 0 00:17:11.654 }, 00:17:11.654 "claimed": false, 00:17:11.654 "zoned": false, 00:17:11.654 "supported_io_types": { 00:17:11.654 "read": true, 00:17:11.654 "write": true, 00:17:11.654 "unmap": true, 00:17:11.654 "flush": false, 00:17:11.654 "reset": true, 00:17:11.654 "nvme_admin": false, 00:17:11.654 "nvme_io": false, 00:17:11.654 "nvme_io_md": false, 00:17:11.654 "write_zeroes": true, 00:17:11.654 "zcopy": false, 00:17:11.654 "get_zone_info": false, 00:17:11.654 "zone_management": false, 00:17:11.654 "zone_append": false, 00:17:11.654 "compare": false, 00:17:11.654 "compare_and_write": false, 00:17:11.654 "abort": false, 00:17:11.654 "seek_hole": true, 00:17:11.654 "seek_data": true, 00:17:11.654 "copy": false, 00:17:11.654 "nvme_iov_md": false 00:17:11.654 }, 00:17:11.654 "driver_specific": { 00:17:11.654 "lvol": { 00:17:11.654 "lvol_store_uuid": "391979f1-47aa-4747-9e0f-84b063e0af9e", 00:17:11.654 "base_bdev": "nvme0n1", 00:17:11.654 "thin_provision": true, 00:17:11.654 "num_allocated_clusters": 0, 00:17:11.654 "snapshot": false, 00:17:11.654 "clone": false, 00:17:11.654 "esnap_clone": false 00:17:11.654 } 00:17:11.654 } 00:17:11.654 } 00:17:11.654 ]' 00:17:11.654 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:11.912 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:17:11.912 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:11.912 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:11.912 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:11.912 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:17:11.912 18:07:28 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:11.912 18:07:28 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:12.170 18:07:28 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:12.170 18:07:28 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:12.170 18:07:28 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:12.170 /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:12.170 18:07:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size a3edf63a-bf92-471f-9267-aeacd993433f 00:17:12.170 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1380 -- # local 
bdev_name=a3edf63a-bf92-471f-9267-aeacd993433f 00:17:12.170 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1381 -- # local bdev_info 00:17:12.170 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bs 00:17:12.170 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local nb 00:17:12.170 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a3edf63a-bf92-471f-9267-aeacd993433f 00:17:12.427 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:17:12.427 { 00:17:12.427 "name": "a3edf63a-bf92-471f-9267-aeacd993433f", 00:17:12.427 "aliases": [ 00:17:12.427 "lvs/nvme0n1p0" 00:17:12.427 ], 00:17:12.427 "product_name": "Logical Volume", 00:17:12.427 "block_size": 4096, 00:17:12.427 "num_blocks": 26476544, 00:17:12.427 "uuid": "a3edf63a-bf92-471f-9267-aeacd993433f", 00:17:12.427 "assigned_rate_limits": { 00:17:12.427 "rw_ios_per_sec": 0, 00:17:12.427 "rw_mbytes_per_sec": 0, 00:17:12.427 "r_mbytes_per_sec": 0, 00:17:12.427 "w_mbytes_per_sec": 0 00:17:12.427 }, 00:17:12.427 "claimed": false, 00:17:12.427 "zoned": false, 00:17:12.427 "supported_io_types": { 00:17:12.427 "read": true, 00:17:12.427 "write": true, 00:17:12.427 "unmap": true, 00:17:12.427 "flush": false, 00:17:12.427 "reset": true, 00:17:12.427 "nvme_admin": false, 00:17:12.427 "nvme_io": false, 00:17:12.427 "nvme_io_md": false, 00:17:12.427 "write_zeroes": true, 00:17:12.427 "zcopy": false, 00:17:12.427 "get_zone_info": false, 00:17:12.427 "zone_management": false, 00:17:12.427 "zone_append": false, 00:17:12.427 "compare": false, 00:17:12.427 "compare_and_write": false, 00:17:12.427 "abort": false, 00:17:12.427 "seek_hole": true, 00:17:12.427 "seek_data": true, 00:17:12.427 "copy": false, 00:17:12.427 "nvme_iov_md": false 00:17:12.427 }, 00:17:12.427 "driver_specific": { 00:17:12.427 "lvol": { 00:17:12.427 "lvol_store_uuid": "391979f1-47aa-4747-9e0f-84b063e0af9e", 00:17:12.427 "base_bdev": "nvme0n1", 00:17:12.427 "thin_provision": true, 00:17:12.428 "num_allocated_clusters": 0, 00:17:12.428 "snapshot": false, 00:17:12.428 "clone": false, 00:17:12.428 "esnap_clone": false 00:17:12.428 } 00:17:12.428 } 00:17:12.428 } 00:17:12.428 ]' 00:17:12.428 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:17:12.428 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # bs=4096 00:17:12.428 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:17:12.428 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # nb=26476544 00:17:12.428 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:17:12.428 18:07:28 ftl.ftl_fio_basic -- common/autotest_common.sh@1390 -- # echo 103424 00:17:12.428 18:07:28 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:12.428 18:07:28 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:12.428 18:07:28 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d a3edf63a-bf92-471f-9267-aeacd993433f -c nvc0n1p0 --l2p_dram_limit 60 00:17:12.686 [2024-10-28 18:07:29.128570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.686 [2024-10-28 18:07:29.128872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:12.686 [2024-10-28 18:07:29.128914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:17:12.686 
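A step back, fio.sh@52 logged "[: -eq: unary operator expected": the tested variable expanded to nothing, so test was invoked as '[' -eq 1 ']' with no left-hand operand. The run continues (the conditional just evaluates false), but the failure mode and the usual quoting fix are easy to reproduce; the variable name below is hypothetical, since the log does not show which one fio.sh line 52 actually tests:

    flag=""
    [ $flag -eq 1 ]          # expands to [ -eq 1 ]  ->  "[: -eq: unary operator expected"
    [ "${flag:-0}" -eq 1 ]   # quoted, with a 0 default: evaluates false instead of erroring
    [[ ${flag:-0} -eq 1 ]]   # bash [[ ]] with a default is equally safe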
[2024-10-28 18:07:29.128940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.686 [2024-10-28 18:07:29.129061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.686 [2024-10-28 18:07:29.129085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:12.686 [2024-10-28 18:07:29.129100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:17:12.686 [2024-10-28 18:07:29.129112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.686 [2024-10-28 18:07:29.129173] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:12.686 [2024-10-28 18:07:29.130219] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:12.686 [2024-10-28 18:07:29.130253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.686 [2024-10-28 18:07:29.130266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:12.686 [2024-10-28 18:07:29.130280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.103 ms 00:17:12.686 [2024-10-28 18:07:29.130292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.686 [2024-10-28 18:07:29.130451] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 72969a5f-3b46-440b-a991-835ec41e9851 00:17:12.686 [2024-10-28 18:07:29.131589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.686 [2024-10-28 18:07:29.131640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:12.686 [2024-10-28 18:07:29.131658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:17:12.686 [2024-10-28 18:07:29.131673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.686 [2024-10-28 18:07:29.136412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.686 [2024-10-28 18:07:29.136472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:12.686 [2024-10-28 18:07:29.136490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:17:12.686 [2024-10-28 18:07:29.136504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.686 [2024-10-28 18:07:29.136653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.686 [2024-10-28 18:07:29.136679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:12.686 [2024-10-28 18:07:29.136693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.103 ms 00:17:12.686 [2024-10-28 18:07:29.136713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.686 [2024-10-28 18:07:29.136808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.686 [2024-10-28 18:07:29.136831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:12.686 [2024-10-28 18:07:29.136865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:17:12.686 [2024-10-28 18:07:29.136880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.686 [2024-10-28 18:07:29.136922] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:12.686 [2024-10-28 18:07:29.141504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.686 [2024-10-28 
18:07:29.141547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:12.686 [2024-10-28 18:07:29.141569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.588 ms 00:17:12.686 [2024-10-28 18:07:29.141585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.686 [2024-10-28 18:07:29.141647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.686 [2024-10-28 18:07:29.141664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:12.686 [2024-10-28 18:07:29.141679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:17:12.686 [2024-10-28 18:07:29.141690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.686 [2024-10-28 18:07:29.141765] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:12.686 [2024-10-28 18:07:29.141974] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:12.686 [2024-10-28 18:07:29.142015] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:12.686 [2024-10-28 18:07:29.142032] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:12.686 [2024-10-28 18:07:29.142049] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:12.686 [2024-10-28 18:07:29.142063] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:12.686 [2024-10-28 18:07:29.142078] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:12.687 [2024-10-28 18:07:29.142090] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:12.687 [2024-10-28 18:07:29.142103] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:12.687 [2024-10-28 18:07:29.142114] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:12.687 [2024-10-28 18:07:29.142128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.687 [2024-10-28 18:07:29.142142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:12.687 [2024-10-28 18:07:29.142159] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.367 ms 00:17:12.687 [2024-10-28 18:07:29.142171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.687 [2024-10-28 18:07:29.142285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.687 [2024-10-28 18:07:29.142302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:12.687 [2024-10-28 18:07:29.142317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.075 ms 00:17:12.687 [2024-10-28 18:07:29.142329] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.687 [2024-10-28 18:07:29.142460] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:12.687 [2024-10-28 18:07:29.142476] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:12.687 [2024-10-28 18:07:29.142493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:12.687 [2024-10-28 18:07:29.142506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142520] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region l2p 00:17:12.687 [2024-10-28 18:07:29.142531] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:12.687 [2024-10-28 18:07:29.142555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:12.687 [2024-10-28 18:07:29.142568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142579] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:12.687 [2024-10-28 18:07:29.142592] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:12.687 [2024-10-28 18:07:29.142603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:12.687 [2024-10-28 18:07:29.142615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:12.687 [2024-10-28 18:07:29.142627] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:12.687 [2024-10-28 18:07:29.142643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:12.687 [2024-10-28 18:07:29.142655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:12.687 [2024-10-28 18:07:29.142683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:12.687 [2024-10-28 18:07:29.142696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142707] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:12.687 [2024-10-28 18:07:29.142721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.687 [2024-10-28 18:07:29.142744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:12.687 [2024-10-28 18:07:29.142755] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142768] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.687 [2024-10-28 18:07:29.142779] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:12.687 [2024-10-28 18:07:29.142792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142802] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.687 [2024-10-28 18:07:29.142815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:12.687 [2024-10-28 18:07:29.142826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:12.687 [2024-10-28 18:07:29.142867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:12.687 [2024-10-28 18:07:29.142881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:12.687 [2024-10-28 18:07:29.142905] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:12.687 [2024-10-28 18:07:29.142937] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:12.687 [2024-10-28 18:07:29.142951] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:12.687 [2024-10-28 18:07:29.142962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:12.687 [2024-10-28 18:07:29.142975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:12.687 [2024-10-28 18:07:29.142986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.687 [2024-10-28 18:07:29.142999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:12.687 [2024-10-28 18:07:29.143010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:12.687 [2024-10-28 18:07:29.143025] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.687 [2024-10-28 18:07:29.143036] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:12.687 [2024-10-28 18:07:29.143050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:12.687 [2024-10-28 18:07:29.143061] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:12.687 [2024-10-28 18:07:29.143077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:12.687 [2024-10-28 18:07:29.143090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:12.687 [2024-10-28 18:07:29.143105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:12.687 [2024-10-28 18:07:29.143116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:12.687 [2024-10-28 18:07:29.143130] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:12.687 [2024-10-28 18:07:29.143140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:12.687 [2024-10-28 18:07:29.143154] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:12.687 [2024-10-28 18:07:29.143170] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:12.687 [2024-10-28 18:07:29.143187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:12.687 [2024-10-28 18:07:29.143200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:12.687 [2024-10-28 18:07:29.143214] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:12.687 [2024-10-28 18:07:29.143226] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:12.687 [2024-10-28 18:07:29.143239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:12.687 [2024-10-28 18:07:29.143250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:12.687 [2024-10-28 18:07:29.143264] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:12.687 [2024-10-28 18:07:29.143275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:12.687 [2024-10-28 18:07:29.143289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 
blk_offs:0x7120 blk_sz:0x40 00:17:12.687 [2024-10-28 18:07:29.143300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:12.687 [2024-10-28 18:07:29.143315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:12.687 [2024-10-28 18:07:29.143327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:12.687 [2024-10-28 18:07:29.143342] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:12.687 [2024-10-28 18:07:29.143354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:12.687 [2024-10-28 18:07:29.143367] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:12.687 [2024-10-28 18:07:29.143379] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:12.687 [2024-10-28 18:07:29.143394] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:12.687 [2024-10-28 18:07:29.143409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:12.687 [2024-10-28 18:07:29.143423] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:12.687 [2024-10-28 18:07:29.143434] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:12.687 [2024-10-28 18:07:29.143448] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:12.687 [2024-10-28 18:07:29.143461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:12.687 [2024-10-28 18:07:29.143474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:12.687 [2024-10-28 18:07:29.143487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.076 ms 00:17:12.687 [2024-10-28 18:07:29.143503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:12.687 [2024-10-28 18:07:29.143578] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
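Before the scrub starts, the layout dump above already explains the --l2p_dram_limit 60 passed to bdev_ftl_create: ftl0 addresses 20971520 user blocks with 4-byte L2P entries, so a fully resident map would need 80 MiB (the "Region l2p ... blocks: 80.00 MiB" line), and the 60 MiB cap forces a partially resident, cache-managed L2P, confirmed a little further down by "l2p maximum resident size is: 59 (of 60) MiB". The arithmetic as a sketch, with both inputs read off the dump:

    entries=20971520   # "L2P entries" in the layout dump
    addr=4             # "L2P address size" (bytes per entry)
    echo $(( entries * addr / 1024 / 1024 ))   # 80 (MiB) needed vs. the 60 MiB DRAM limit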
00:17:12.687 [2024-10-28 18:07:29.143600] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:15.967 [2024-10-28 18:07:32.163678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.163783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:15.967 [2024-10-28 18:07:32.163808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3020.117 ms 00:17:15.967 [2024-10-28 18:07:32.163824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.198003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.198076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:15.967 [2024-10-28 18:07:32.198098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.885 ms 00:17:15.967 [2024-10-28 18:07:32.198114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.198302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.198327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:15.967 [2024-10-28 18:07:32.198341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:17:15.967 [2024-10-28 18:07:32.198358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.249919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.250018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:15.967 [2024-10-28 18:07:32.250050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.490 ms 00:17:15.967 [2024-10-28 18:07:32.250074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.250158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.250183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:15.967 [2024-10-28 18:07:32.250201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:17:15.967 [2024-10-28 18:07:32.250219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.250789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.250891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:15.967 [2024-10-28 18:07:32.250932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:17:15.967 [2024-10-28 18:07:32.250975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.251315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.251387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:15.967 [2024-10-28 18:07:32.251440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:17:15.967 [2024-10-28 18:07:32.251480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.273144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.273403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:15.967 [2024-10-28 
18:07:32.273453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.587 ms 00:17:15.967 [2024-10-28 18:07:32.273484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.287404] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:15.967 [2024-10-28 18:07:32.301708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.301808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:15.967 [2024-10-28 18:07:32.301853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.032 ms 00:17:15.967 [2024-10-28 18:07:32.301873] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.359258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.359334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:15.967 [2024-10-28 18:07:32.359362] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 57.311 ms 00:17:15.967 [2024-10-28 18:07:32.359375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.359636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.359659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:15.967 [2024-10-28 18:07:32.359678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:17:15.967 [2024-10-28 18:07:32.359690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.391835] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.391914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:15.967 [2024-10-28 18:07:32.391956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.033 ms 00:17:15.967 [2024-10-28 18:07:32.391969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.423070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.423115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:15.967 [2024-10-28 18:07:32.423153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.038 ms 00:17:15.967 [2024-10-28 18:07:32.423165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:15.967 [2024-10-28 18:07:32.424005] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:15.967 [2024-10-28 18:07:32.424059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:15.967 [2024-10-28 18:07:32.424096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.780 ms 00:17:15.967 [2024-10-28 18:07:32.424108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.225 [2024-10-28 18:07:32.512585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.225 [2024-10-28 18:07:32.512647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:16.225 [2024-10-28 18:07:32.512676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 88.386 ms 00:17:16.225 [2024-10-28 18:07:32.512693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.225 [2024-10-28 
18:07:32.547348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.225 [2024-10-28 18:07:32.547407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:16.225 [2024-10-28 18:07:32.547431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.503 ms 00:17:16.225 [2024-10-28 18:07:32.547444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.225 [2024-10-28 18:07:32.579679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.225 [2024-10-28 18:07:32.579743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:16.225 [2024-10-28 18:07:32.579767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.165 ms 00:17:16.225 [2024-10-28 18:07:32.579779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.225 [2024-10-28 18:07:32.612128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.225 [2024-10-28 18:07:32.612205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:16.226 [2024-10-28 18:07:32.612232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.255 ms 00:17:16.226 [2024-10-28 18:07:32.612245] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.226 [2024-10-28 18:07:32.612333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.226 [2024-10-28 18:07:32.612351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:16.226 [2024-10-28 18:07:32.612371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:16.226 [2024-10-28 18:07:32.612386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.226 [2024-10-28 18:07:32.612568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:16.226 [2024-10-28 18:07:32.612594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:16.226 [2024-10-28 18:07:32.612611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:17:16.226 [2024-10-28 18:07:32.612623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:16.226 [2024-10-28 18:07:32.614057] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3484.881 ms, result 0 00:17:16.226 { 00:17:16.226 "name": "ftl0", 00:17:16.226 "uuid": "72969a5f-3b46-440b-a991-835ec41e9851" 00:17:16.226 } 00:17:16.226 18:07:32 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:16.226 18:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:17:16.226 18:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:17:16.226 18:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local i 00:17:16.226 18:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:17:16.226 18:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:17:16.226 18:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:16.485 18:07:32 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:17.050 [ 00:17:17.050 { 00:17:17.050 "name": "ftl0", 00:17:17.050 "aliases": [ 00:17:17.050 "72969a5f-3b46-440b-a991-835ec41e9851" 00:17:17.050 ], 00:17:17.050 "product_name": "FTL 
disk", 00:17:17.050 "block_size": 4096, 00:17:17.050 "num_blocks": 20971520, 00:17:17.050 "uuid": "72969a5f-3b46-440b-a991-835ec41e9851", 00:17:17.050 "assigned_rate_limits": { 00:17:17.050 "rw_ios_per_sec": 0, 00:17:17.050 "rw_mbytes_per_sec": 0, 00:17:17.050 "r_mbytes_per_sec": 0, 00:17:17.050 "w_mbytes_per_sec": 0 00:17:17.050 }, 00:17:17.050 "claimed": false, 00:17:17.050 "zoned": false, 00:17:17.050 "supported_io_types": { 00:17:17.050 "read": true, 00:17:17.050 "write": true, 00:17:17.050 "unmap": true, 00:17:17.050 "flush": true, 00:17:17.050 "reset": false, 00:17:17.050 "nvme_admin": false, 00:17:17.050 "nvme_io": false, 00:17:17.050 "nvme_io_md": false, 00:17:17.050 "write_zeroes": true, 00:17:17.050 "zcopy": false, 00:17:17.050 "get_zone_info": false, 00:17:17.050 "zone_management": false, 00:17:17.050 "zone_append": false, 00:17:17.050 "compare": false, 00:17:17.050 "compare_and_write": false, 00:17:17.050 "abort": false, 00:17:17.050 "seek_hole": false, 00:17:17.050 "seek_data": false, 00:17:17.050 "copy": false, 00:17:17.050 "nvme_iov_md": false 00:17:17.050 }, 00:17:17.050 "driver_specific": { 00:17:17.050 "ftl": { 00:17:17.050 "base_bdev": "a3edf63a-bf92-471f-9267-aeacd993433f", 00:17:17.050 "cache": "nvc0n1p0" 00:17:17.050 } 00:17:17.050 } 00:17:17.050 } 00:17:17.050 ] 00:17:17.050 18:07:33 ftl.ftl_fio_basic -- common/autotest_common.sh@909 -- # return 0 00:17:17.050 18:07:33 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:17.050 18:07:33 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:17.308 18:07:33 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:17.308 18:07:33 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:17.566 [2024-10-28 18:07:33.895359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.566 [2024-10-28 18:07:33.895427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:17.566 [2024-10-28 18:07:33.895467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:17.566 [2024-10-28 18:07:33.895481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.566 [2024-10-28 18:07:33.895538] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:17.566 [2024-10-28 18:07:33.898958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.566 [2024-10-28 18:07:33.898994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:17.566 [2024-10-28 18:07:33.899012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.391 ms 00:17:17.566 [2024-10-28 18:07:33.899024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.566 [2024-10-28 18:07:33.899487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.566 [2024-10-28 18:07:33.899545] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:17.566 [2024-10-28 18:07:33.899562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.419 ms 00:17:17.566 [2024-10-28 18:07:33.899575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.566 [2024-10-28 18:07:33.902882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.566 [2024-10-28 18:07:33.902927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:17.566 
[2024-10-28 18:07:33.902963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.276 ms 00:17:17.566 [2024-10-28 18:07:33.902975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.566 [2024-10-28 18:07:33.909339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.566 [2024-10-28 18:07:33.909563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:17.566 [2024-10-28 18:07:33.909598] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.331 ms 00:17:17.566 [2024-10-28 18:07:33.909612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.566 [2024-10-28 18:07:33.939503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.566 [2024-10-28 18:07:33.939714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:17.566 [2024-10-28 18:07:33.939751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.779 ms 00:17:17.566 [2024-10-28 18:07:33.939765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.566 [2024-10-28 18:07:33.958219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.566 [2024-10-28 18:07:33.958262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:17.566 [2024-10-28 18:07:33.958298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.326 ms 00:17:17.566 [2024-10-28 18:07:33.958314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.566 [2024-10-28 18:07:33.958533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.566 [2024-10-28 18:07:33.958555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:17.566 [2024-10-28 18:07:33.958570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.156 ms 00:17:17.566 [2024-10-28 18:07:33.958581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.566 [2024-10-28 18:07:33.988592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.566 [2024-10-28 18:07:33.988785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:17.567 [2024-10-28 18:07:33.988820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.972 ms 00:17:17.567 [2024-10-28 18:07:33.988856] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.567 [2024-10-28 18:07:34.018954] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.567 [2024-10-28 18:07:34.018997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:17.567 [2024-10-28 18:07:34.019034] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.030 ms 00:17:17.567 [2024-10-28 18:07:34.019046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.826 [2024-10-28 18:07:34.049114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.826 [2024-10-28 18:07:34.049286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:17.826 [2024-10-28 18:07:34.049321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.007 ms 00:17:17.826 [2024-10-28 18:07:34.049334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.826 [2024-10-28 18:07:34.079494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:17.826 [2024-10-28 18:07:34.079534] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:17.826 [2024-10-28 18:07:34.079569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.011 ms 00:17:17.826 [2024-10-28 18:07:34.079581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.826 [2024-10-28 18:07:34.079639] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:17.826 [2024-10-28 18:07:34.079662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.079984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 
[2024-10-28 18:07:34.080014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:17.826 [2024-10-28 18:07:34.080243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:17.827 [2024-10-28 18:07:34.080354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:17.827 [2024-10-28 18:07:34.080672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080765] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.080982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.081006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.081019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.081033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.081048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.081062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.081074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.081090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:17:17.827 [2024-10-28 18:07:34.081112] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:17:17.827 [2024-10-28 18:07:34.081127] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 72969a5f-3b46-440b-a991-835ec41e9851
00:17:17.827 [2024-10-28 18:07:34.081139] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:17:17.827 [2024-10-28 18:07:34.081154] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:17:17.827 [2024-10-28 18:07:34.081165] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:17:17.827 [2024-10-28 18:07:34.081181] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:17:17.827 [2024-10-28 18:07:34.081193] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:17:17.827 [2024-10-28 18:07:34.081206] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:17:17.827 [2024-10-28 18:07:34.081218] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:17:17.827 [2024-10-28 18:07:34.081230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:17:17.827 [2024-10-28 18:07:34.081241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:17:17.827 [2024-10-28 18:07:34.081255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:17.827 [2024-10-28 18:07:34.081266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:17:17.827 [2024-10-28 18:07:34.081281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.619 ms
00:17:17.827 [2024-10-28 18:07:34.081293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:17.827 [2024-10-28 18:07:34.097703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:17.827 [2024-10-28 18:07:34.097929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:17:17.827 [2024-10-28 18:07:34.097965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.333 ms
00:17:17.827 [2024-10-28 18:07:34.097980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:17.827 [2024-10-28 18:07:34.098428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:17:17.827 [2024-10-28 18:07:34.098451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:17:17.827 [2024-10-28 18:07:34.098467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.407 ms
00:17:17.827 [2024-10-28 18:07:34.098478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:17:17.827 [2024-10-28 18:07:34.153886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:17:17.827 [2024-10-28 18:07:34.153955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:17:17.827 [2024-10-28 18:07:34.153993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:17:17.827 [2024-10-28 18:07:34.154005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
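The "WAF: inf" figure in the statistics dump above is the write amplification factor, i.e. total media writes divided by user writes. This shutdown happened before the job issued any user I/O (user writes: 0 against total writes: 960, presumably all internal metadata writes), so the ratio is reported as infinite. A minimal sketch of the same computation, assuming the two counters are copied out of the dump by hand (this helper is hypothetical, not part of the SPDK test scripts):

# Reproduce the WAF value from the "total writes" / "user writes" counters above.
total_writes=960
user_writes=0
if [ "$user_writes" -eq 0 ]; then
    echo 'WAF: inf'    # no user writes yet, matches the dump
else
    awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t/u }'
fi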
00:17:17.827 [2024-10-28 18:07:34.154104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.827 [2024-10-28 18:07:34.154127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:17.827 [2024-10-28 18:07:34.154142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.827 [2024-10-28 18:07:34.154154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.827 [2024-10-28 18:07:34.154290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.827 [2024-10-28 18:07:34.154310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:17.827 [2024-10-28 18:07:34.154329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.827 [2024-10-28 18:07:34.154340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.827 [2024-10-28 18:07:34.154377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.827 [2024-10-28 18:07:34.154390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:17.827 [2024-10-28 18:07:34.154404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.827 [2024-10-28 18:07:34.154416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:17.827 [2024-10-28 18:07:34.258312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:17.827 [2024-10-28 18:07:34.258382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:17.827 [2024-10-28 18:07:34.258420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:17.827 [2024-10-28 18:07:34.258432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.086 [2024-10-28 18:07:34.339515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.086 [2024-10-28 18:07:34.339579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:18.086 [2024-10-28 18:07:34.339618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.086 [2024-10-28 18:07:34.339630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.086 [2024-10-28 18:07:34.339766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.086 [2024-10-28 18:07:34.339786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:18.086 [2024-10-28 18:07:34.339800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.086 [2024-10-28 18:07:34.339814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.086 [2024-10-28 18:07:34.339961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.086 [2024-10-28 18:07:34.339983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:18.086 [2024-10-28 18:07:34.339998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.086 [2024-10-28 18:07:34.340010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.086 [2024-10-28 18:07:34.340156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.086 [2024-10-28 18:07:34.340176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:18.086 [2024-10-28 18:07:34.340192] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.086 [2024-10-28 
18:07:34.340203] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.086 [2024-10-28 18:07:34.340282] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.086 [2024-10-28 18:07:34.340301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:18.086 [2024-10-28 18:07:34.340315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.086 [2024-10-28 18:07:34.340327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.086 [2024-10-28 18:07:34.340385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.086 [2024-10-28 18:07:34.340401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:18.086 [2024-10-28 18:07:34.340415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.086 [2024-10-28 18:07:34.340426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.086 [2024-10-28 18:07:34.340496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:18.086 [2024-10-28 18:07:34.340513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:18.086 [2024-10-28 18:07:34.340527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:18.086 [2024-10-28 18:07:34.340539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:18.086 [2024-10-28 18:07:34.340729] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 445.343 ms, result 0 00:17:18.086 true 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 73991 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@952 -- # '[' -z 73991 ']' 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@956 -- # kill -0 73991 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # uname 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 73991 00:17:18.086 killing process with pid 73991 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 73991' 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@971 -- # kill 73991 00:17:18.086 18:07:34 ftl.ftl_fio_basic -- common/autotest_common.sh@976 -- # wait 73991 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:23.505 18:07:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:17:23.505 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:17:23.505 fio-3.35 00:17:23.505 Starting 1 thread 00:17:28.771 00:17:28.771 test: (groupid=0, jobs=1): err= 0: pid=74205: Mon Oct 28 18:07:44 2024 00:17:28.771 read: IOPS=926, BW=61.5MiB/s (64.5MB/s)(255MiB/4137msec) 00:17:28.771 slat (nsec): min=5670, max=49968, avg=7286.20, stdev=3011.54 00:17:28.771 clat (usec): min=326, max=935, avg=483.20, stdev=60.34 00:17:28.771 lat (usec): min=342, max=941, avg=490.48, stdev=60.87 00:17:28.771 clat percentiles (usec): 00:17:28.771 | 1.00th=[ 371], 5.00th=[ 388], 10.00th=[ 416], 20.00th=[ 445], 00:17:28.771 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 486], 00:17:28.771 | 70.00th=[ 506], 80.00th=[ 529], 90.00th=[ 562], 95.00th=[ 586], 00:17:28.771 | 99.00th=[ 652], 99.50th=[ 693], 99.90th=[ 783], 99.95th=[ 930], 00:17:28.771 | 99.99th=[ 938] 00:17:28.771 write: IOPS=933, BW=62.0MiB/s (65.0MB/s)(256MiB/4132msec); 0 zone resets 00:17:28.771 slat (nsec): min=18869, max=91148, avg=24414.08, stdev=5102.06 00:17:28.771 clat (usec): min=375, max=1533, avg=547.00, stdev=70.55 00:17:28.771 lat (usec): min=397, max=1554, avg=571.41, stdev=70.80 00:17:28.771 clat percentiles (usec): 00:17:28.771 | 1.00th=[ 408], 5.00th=[ 465], 10.00th=[ 474], 20.00th=[ 486], 00:17:28.771 | 30.00th=[ 502], 40.00th=[ 529], 50.00th=[ 545], 60.00th=[ 553], 00:17:28.771 | 70.00th=[ 570], 80.00th=[ 594], 90.00th=[ 635], 95.00th=[ 660], 00:17:28.771 | 99.00th=[ 783], 99.50th=[ 832], 99.90th=[ 922], 99.95th=[ 1004], 00:17:28.771 | 99.99th=[ 1532] 00:17:28.771 bw ( KiB/s): min=62016, max=64872, per=100.00%, avg=63529.00, stdev=1000.55, samples=8 00:17:28.771 iops : min= 912, max= 954, avg=934.25, stdev=14.71, samples=8 00:17:28.771 lat (usec) : 500=47.94%, 750=51.18%, 1000=0.86% 00:17:28.771 lat 
(msec) : 2=0.03% 00:17:28.771 cpu : usr=99.23%, sys=0.10%, ctx=8, majf=0, minf=1169 00:17:28.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:28.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:28.771 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:28.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:28.771 00:17:28.771 Run status group 0 (all jobs): 00:17:28.771 READ: bw=61.5MiB/s (64.5MB/s), 61.5MiB/s-61.5MiB/s (64.5MB/s-64.5MB/s), io=255MiB (267MB), run=4137-4137msec 00:17:28.771 WRITE: bw=62.0MiB/s (65.0MB/s), 62.0MiB/s-62.0MiB/s (65.0MB/s-65.0MB/s), io=256MiB (269MB), run=4132-4132msec 00:17:30.147 ----------------------------------------------------- 00:17:30.147 Suppressions used: 00:17:30.147 count bytes template 00:17:30.147 1 5 /usr/src/fio/parse.c 00:17:30.147 1 8 libtcmalloc_minimal.so 00:17:30.147 1 904 libcrypto.so 00:17:30.147 ----------------------------------------------------- 00:17:30.147 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # 
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:17:30.147 18:07:46 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:17:30.405 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:30.405 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:17:30.405 fio-3.35 00:17:30.405 Starting 2 threads 00:18:02.467 00:18:02.467 first_half: (groupid=0, jobs=1): err= 0: pid=74309: Mon Oct 28 18:08:17 2024 00:18:02.467 read: IOPS=2203, BW=8813KiB/s (9025kB/s)(255MiB/29611msec) 00:18:02.467 slat (nsec): min=4529, max=47657, avg=7109.63, stdev=1987.38 00:18:02.467 clat (usec): min=966, max=316111, avg=42762.84, stdev=21666.76 00:18:02.467 lat (usec): min=975, max=316116, avg=42769.95, stdev=21666.91 00:18:02.467 clat percentiles (msec): 00:18:02.467 | 1.00th=[ 11], 5.00th=[ 33], 10.00th=[ 39], 20.00th=[ 39], 00:18:02.467 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 40], 60.00th=[ 41], 00:18:02.467 | 70.00th=[ 41], 80.00th=[ 44], 90.00th=[ 47], 95.00th=[ 49], 00:18:02.467 | 99.00th=[ 171], 99.50th=[ 197], 99.90th=[ 247], 99.95th=[ 271], 00:18:02.467 | 99.99th=[ 305] 00:18:02.467 write: IOPS=2594, BW=10.1MiB/s (10.6MB/s)(256MiB/25257msec); 0 zone resets 00:18:02.467 slat (usec): min=5, max=398, avg= 9.72, stdev= 6.50 00:18:02.467 clat (usec): min=432, max=114008, avg=15203.21, stdev=25539.02 00:18:02.467 lat (usec): min=443, max=114020, avg=15212.93, stdev=25539.37 00:18:02.467 clat percentiles (usec): 00:18:02.467 | 1.00th=[ 963], 5.00th=[ 1319], 10.00th=[ 1549], 20.00th=[ 2024], 00:18:02.467 | 30.00th=[ 3720], 40.00th=[ 5604], 50.00th=[ 6849], 60.00th=[ 7767], 00:18:02.467 | 70.00th=[ 9241], 80.00th=[ 13435], 90.00th=[ 43254], 95.00th=[ 92799], 00:18:02.467 | 99.00th=[101188], 99.50th=[103285], 99.90th=[108528], 99.95th=[109577], 00:18:02.467 | 99.99th=[112722] 00:18:02.467 bw ( KiB/s): min= 224, max=43112, per=87.09%, avg=18078.90, stdev=11254.58, samples=29 00:18:02.467 iops : min= 56, max=10778, avg=4519.72, stdev=2813.64, samples=29 00:18:02.467 lat (usec) : 500=0.01%, 750=0.06%, 1000=0.57% 00:18:02.467 lat (msec) : 2=9.39%, 4=6.04%, 10=20.40%, 20=9.73%, 50=46.85% 00:18:02.467 lat (msec) : 100=4.96%, 250=1.96%, 500=0.05% 00:18:02.467 cpu : usr=99.09%, sys=0.19%, ctx=54, majf=0, minf=5585 00:18:02.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:02.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.467 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:02.467 issued rwts: total=65241,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:02.467 second_half: (groupid=0, jobs=1): err= 0: pid=74310: Mon Oct 28 18:08:17 2024 00:18:02.467 read: IOPS=2215, BW=8863KiB/s (9076kB/s)(254MiB/29399msec) 00:18:02.467 slat (nsec): min=4617, max=44289, avg=7251.92, stdev=2075.34 00:18:02.467 clat (usec): min=854, max=319780, avg=43927.04, stdev=21078.77 00:18:02.467 lat (usec): min=862, max=319788, avg=43934.29, stdev=21078.96 00:18:02.467 clat percentiles (msec): 00:18:02.467 | 1.00th=[ 6], 5.00th=[ 39], 10.00th=[ 39], 20.00th=[ 39], 00:18:02.467 | 30.00th=[ 40], 40.00th=[ 40], 50.00th=[ 41], 60.00th=[ 41], 00:18:02.467 | 70.00th=[ 41], 80.00th=[ 44], 90.00th=[ 47], 95.00th=[ 56], 
00:18:02.467 | 99.00th=[ 159], 99.50th=[ 190], 99.90th=[ 228], 99.95th=[ 234], 00:18:02.467 | 99.99th=[ 313] 00:18:02.467 write: IOPS=3415, BW=13.3MiB/s (14.0MB/s)(256MiB/19189msec); 0 zone resets 00:18:02.467 slat (usec): min=5, max=416, avg= 9.59, stdev= 5.57 00:18:02.467 clat (usec): min=471, max=115018, avg=13730.19, stdev=25166.72 00:18:02.467 lat (usec): min=480, max=115026, avg=13739.78, stdev=25166.82 00:18:02.467 clat percentiles (usec): 00:18:02.467 | 1.00th=[ 1090], 5.00th=[ 1369], 10.00th=[ 1549], 20.00th=[ 1827], 00:18:02.467 | 30.00th=[ 2212], 40.00th=[ 3851], 50.00th=[ 5473], 60.00th=[ 6849], 00:18:02.467 | 70.00th=[ 8291], 80.00th=[ 12911], 90.00th=[ 19792], 95.00th=[ 91751], 00:18:02.467 | 99.00th=[102237], 99.50th=[104334], 99.90th=[109577], 99.95th=[111674], 00:18:02.467 | 99.99th=[113771] 00:18:02.467 bw ( KiB/s): min= 264, max=40672, per=100.00%, avg=21844.33, stdev=9247.25, samples=24 00:18:02.467 iops : min= 66, max=10168, avg=5461.08, stdev=2311.81, samples=24 00:18:02.467 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.23% 00:18:02.467 lat (msec) : 2=12.58%, 4=8.22%, 10=16.72%, 20=8.36%, 50=46.52% 00:18:02.467 lat (msec) : 100=5.11%, 250=2.24%, 500=0.01% 00:18:02.467 cpu : usr=99.06%, sys=0.23%, ctx=78, majf=0, minf=5536 00:18:02.467 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:02.467 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:02.467 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:02.467 issued rwts: total=65142,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:02.467 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:02.467 00:18:02.467 Run status group 0 (all jobs): 00:18:02.467 READ: bw=17.2MiB/s (18.0MB/s), 8813KiB/s-8863KiB/s (9025kB/s-9076kB/s), io=509MiB (534MB), run=29399-29611msec 00:18:02.468 WRITE: bw=20.3MiB/s (21.3MB/s), 10.1MiB/s-13.3MiB/s (10.6MB/s-14.0MB/s), io=512MiB (537MB), run=19189-25257msec 00:18:03.402 ----------------------------------------------------- 00:18:03.402 Suppressions used: 00:18:03.402 count bytes template 00:18:03.402 2 10 /usr/src/fio/parse.c 00:18:03.402 2 192 /usr/src/fio/iolog.c 00:18:03.402 1 8 libtcmalloc_minimal.so 00:18:03.402 1 904 libcrypto.so 00:18:03.402 ----------------------------------------------------- 00:18:03.402 00:18:03.402 18:08:19 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:03.402 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:03.402 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:03.402 18:08:19 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:03.402 18:08:19 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:03.402 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:03.402 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:03.402 18:08:19 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1358 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 
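The sanitizer-detection steps traced before each fio run in this suite follow one pattern: ldd resolves which ASAN runtime the SPDK fio plugin links against, awk keeps the library-path column, and LD_PRELOAD then loads that runtime ahead of the plugin so ASAN initializes before any bdev I/O. A condensed sketch of the traced commands (job.fio stands in for the randw-verify job files used here):

# Condensed from the xtrace above.
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')    # -> /usr/lib64/libasan.so.8
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio job.fio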
00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local sanitizers 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1342 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # shift 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # local asan_lib= 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # grep libasan 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # break 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:03.403 18:08:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:03.661 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:03.661 fio-3.35 00:18:03.661 Starting 1 thread 00:18:21.742 00:18:21.742 test: (groupid=0, jobs=1): err= 0: pid=74678: Mon Oct 28 18:08:37 2024 00:18:21.742 read: IOPS=6360, BW=24.8MiB/s (26.1MB/s)(255MiB/10251msec) 00:18:21.742 slat (usec): min=4, max=370, avg= 6.67, stdev= 3.15 00:18:21.742 clat (usec): min=776, max=39109, avg=20113.49, stdev=1165.11 00:18:21.742 lat (usec): min=781, max=39117, avg=20120.16, stdev=1165.11 00:18:21.742 clat percentiles (usec): 00:18:21.742 | 1.00th=[19006], 5.00th=[19268], 10.00th=[19530], 20.00th=[19530], 00:18:21.742 | 30.00th=[19792], 40.00th=[19792], 50.00th=[20055], 60.00th=[20055], 00:18:21.742 | 70.00th=[20317], 80.00th=[20317], 90.00th=[20841], 95.00th=[21103], 00:18:21.742 | 99.00th=[25297], 99.50th=[27919], 99.90th=[29754], 99.95th=[34341], 00:18:21.742 | 99.99th=[38011] 00:18:21.742 write: IOPS=11.5k, BW=44.9MiB/s (47.1MB/s)(256MiB/5696msec); 0 zone resets 00:18:21.742 slat (usec): min=5, max=484, avg= 9.33, stdev= 6.12 00:18:21.742 clat (usec): min=688, max=60947, avg=11064.82, stdev=13566.01 00:18:21.742 lat (usec): min=695, max=60957, avg=11074.15, stdev=13566.03 00:18:21.742 clat percentiles (usec): 00:18:21.742 | 1.00th=[ 947], 5.00th=[ 1139], 10.00th=[ 1270], 20.00th=[ 1467], 00:18:21.742 | 30.00th=[ 1680], 40.00th=[ 2212], 50.00th=[ 7373], 60.00th=[ 8586], 00:18:21.742 | 70.00th=[10159], 80.00th=[12387], 90.00th=[39060], 95.00th=[43254], 00:18:21.742 | 99.00th=[46924], 99.50th=[47973], 99.90th=[50594], 99.95th=[51643], 00:18:21.742 | 99.99th=[56361] 00:18:21.742 bw ( KiB/s): min=15856, max=66320, per=94.93%, avg=43690.67, stdev=12101.15, samples=12 00:18:21.742 iops : min= 3964, max=16580, avg=10922.67, stdev=3025.29, samples=12 00:18:21.742 lat (usec) : 750=0.02%, 1000=0.88% 00:18:21.742 lat (msec) : 2=18.21%, 4=1.83%, 10=13.76%, 20=33.73%, 50=31.46% 00:18:21.742 lat (msec) : 100=0.11% 00:18:21.742 cpu : usr=98.08%, sys=0.66%, ctx=34, 
majf=0, minf=5565 00:18:21.742 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:21.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:21.742 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:21.742 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:21.742 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:21.742 00:18:21.742 Run status group 0 (all jobs): 00:18:21.742 READ: bw=24.8MiB/s (26.1MB/s), 24.8MiB/s-24.8MiB/s (26.1MB/s-26.1MB/s), io=255MiB (267MB), run=10251-10251msec 00:18:21.742 WRITE: bw=44.9MiB/s (47.1MB/s), 44.9MiB/s-44.9MiB/s (47.1MB/s-47.1MB/s), io=256MiB (268MB), run=5696-5696msec 00:18:22.675 ----------------------------------------------------- 00:18:22.675 Suppressions used: 00:18:22.675 count bytes template 00:18:22.675 1 5 /usr/src/fio/parse.c 00:18:22.675 2 192 /usr/src/fio/iolog.c 00:18:22.675 1 8 libtcmalloc_minimal.so 00:18:22.675 1 904 libcrypto.so 00:18:22.675 ----------------------------------------------------- 00:18:22.675 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:22.675 Remove shared memory files 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid57975 /dev/shm/spdk_tgt_trace.pid72892 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:18:22.675 ************************************ 00:18:22.675 END TEST ftl_fio_basic 00:18:22.675 ************************************ 00:18:22.675 00:18:22.675 real 1m14.874s 00:18:22.675 user 2m47.888s 00:18:22.675 sys 0m3.711s 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:22.675 18:08:39 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:22.675 18:08:39 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:22.675 18:08:39 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:22.675 18:08:39 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:22.676 18:08:39 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:22.676 ************************************ 00:18:22.676 START TEST ftl_bdevperf 00:18:22.676 ************************************ 00:18:22.676 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:18:22.934 * Looking for test storage... 
00:18:22.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:22.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.934 --rc genhtml_branch_coverage=1 00:18:22.934 --rc genhtml_function_coverage=1 00:18:22.934 --rc genhtml_legend=1 00:18:22.934 --rc geninfo_all_blocks=1 00:18:22.934 --rc geninfo_unexecuted_blocks=1 00:18:22.934 00:18:22.934 ' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:22.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.934 --rc genhtml_branch_coverage=1 00:18:22.934 
--rc genhtml_function_coverage=1 00:18:22.934 --rc genhtml_legend=1 00:18:22.934 --rc geninfo_all_blocks=1 00:18:22.934 --rc geninfo_unexecuted_blocks=1 00:18:22.934 00:18:22.934 ' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:22.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.934 --rc genhtml_branch_coverage=1 00:18:22.934 --rc genhtml_function_coverage=1 00:18:22.934 --rc genhtml_legend=1 00:18:22.934 --rc geninfo_all_blocks=1 00:18:22.934 --rc geninfo_unexecuted_blocks=1 00:18:22.934 00:18:22.934 ' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:22.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:22.934 --rc genhtml_branch_coverage=1 00:18:22.934 --rc genhtml_function_coverage=1 00:18:22.934 --rc genhtml_legend=1 00:18:22.934 --rc geninfo_all_blocks=1 00:18:22.934 --rc geninfo_unexecuted_blocks=1 00:18:22.934 00:18:22.934 ' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=74933 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 74933 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 74933 ']' 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:22.934 18:08:39 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:23.192 [2024-10-28 18:08:39.420073] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
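bdevperf is launched here with -z, so it idles and waits for JSON-RPC configuration instead of starting a job immediately, while -T ftl0 points it at the FTL bdev about to be created; waitforlisten then blocks until the application's RPC socket answers. A simplified stand-in for that wait (the real waitforlisten helper does more bookkeeping; the default socket is /var/tmp/spdk.sock, as logged above):

# Start bdevperf in wait-for-RPC mode, then poll until its RPC server is up.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done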
00:18:23.192 [2024-10-28 18:08:39.420428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74933 ] 00:18:23.192 [2024-10-28 18:08:39.592142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.449 [2024-10-28 18:08:39.697052] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:24.048 18:08:40 ftl.ftl_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:24.048 18:08:40 ftl.ftl_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:18:24.048 18:08:40 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:24.048 18:08:40 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:18:24.048 18:08:40 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:24.048 18:08:40 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:18:24.048 18:08:40 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:18:24.048 18:08:40 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:24.305 18:08:40 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:24.305 18:08:40 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:18:24.305 18:08:40 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:24.305 18:08:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:24.305 18:08:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:24.305 18:08:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:18:24.305 18:08:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:18:24.305 18:08:40 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:24.870 { 00:18:24.870 "name": "nvme0n1", 00:18:24.870 "aliases": [ 00:18:24.870 "5b3ea80f-af3c-4441-bfef-ad5b53b73c70" 00:18:24.870 ], 00:18:24.870 "product_name": "NVMe disk", 00:18:24.870 "block_size": 4096, 00:18:24.870 "num_blocks": 1310720, 00:18:24.870 "uuid": "5b3ea80f-af3c-4441-bfef-ad5b53b73c70", 00:18:24.870 "numa_id": -1, 00:18:24.870 "assigned_rate_limits": { 00:18:24.870 "rw_ios_per_sec": 0, 00:18:24.870 "rw_mbytes_per_sec": 0, 00:18:24.870 "r_mbytes_per_sec": 0, 00:18:24.870 "w_mbytes_per_sec": 0 00:18:24.870 }, 00:18:24.870 "claimed": true, 00:18:24.870 "claim_type": "read_many_write_one", 00:18:24.870 "zoned": false, 00:18:24.870 "supported_io_types": { 00:18:24.870 "read": true, 00:18:24.870 "write": true, 00:18:24.870 "unmap": true, 00:18:24.870 "flush": true, 00:18:24.870 "reset": true, 00:18:24.870 "nvme_admin": true, 00:18:24.870 "nvme_io": true, 00:18:24.870 "nvme_io_md": false, 00:18:24.870 "write_zeroes": true, 00:18:24.870 "zcopy": false, 00:18:24.870 "get_zone_info": false, 00:18:24.870 "zone_management": false, 00:18:24.870 "zone_append": false, 00:18:24.870 "compare": true, 00:18:24.870 "compare_and_write": false, 00:18:24.870 "abort": true, 00:18:24.870 "seek_hole": false, 00:18:24.870 "seek_data": false, 00:18:24.870 "copy": true, 00:18:24.870 "nvme_iov_md": false 00:18:24.870 }, 00:18:24.870 "driver_specific": { 00:18:24.870 
"nvme": [ 00:18:24.870 { 00:18:24.870 "pci_address": "0000:00:11.0", 00:18:24.870 "trid": { 00:18:24.870 "trtype": "PCIe", 00:18:24.870 "traddr": "0000:00:11.0" 00:18:24.870 }, 00:18:24.870 "ctrlr_data": { 00:18:24.870 "cntlid": 0, 00:18:24.870 "vendor_id": "0x1b36", 00:18:24.870 "model_number": "QEMU NVMe Ctrl", 00:18:24.870 "serial_number": "12341", 00:18:24.870 "firmware_revision": "8.0.0", 00:18:24.870 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:24.870 "oacs": { 00:18:24.870 "security": 0, 00:18:24.870 "format": 1, 00:18:24.870 "firmware": 0, 00:18:24.870 "ns_manage": 1 00:18:24.870 }, 00:18:24.870 "multi_ctrlr": false, 00:18:24.870 "ana_reporting": false 00:18:24.870 }, 00:18:24.870 "vs": { 00:18:24.870 "nvme_version": "1.4" 00:18:24.870 }, 00:18:24.870 "ns_data": { 00:18:24.870 "id": 1, 00:18:24.870 "can_share": false 00:18:24.870 } 00:18:24.870 } 00:18:24.870 ], 00:18:24.870 "mp_policy": "active_passive" 00:18:24.870 } 00:18:24.870 } 00:18:24.870 ]' 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 5120 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:24.870 18:08:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:25.128 18:08:41 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=391979f1-47aa-4747-9e0f-84b063e0af9e 00:18:25.128 18:08:41 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:18:25.128 18:08:41 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 391979f1-47aa-4747-9e0f-84b063e0af9e 00:18:25.385 18:08:41 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:25.643 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=5284815c-bd51-4ba0-b0cc-26fff67680a8 00:18:25.643 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 5284815c-bd51-4ba0-b0cc-26fff67680a8 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:25.901 18:08:42 
ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:18:25.901 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:26.158 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:26.158 { 00:18:26.158 "name": "dd804f27-caf9-4ed9-b778-464b2d599dbc", 00:18:26.158 "aliases": [ 00:18:26.158 "lvs/nvme0n1p0" 00:18:26.158 ], 00:18:26.158 "product_name": "Logical Volume", 00:18:26.158 "block_size": 4096, 00:18:26.158 "num_blocks": 26476544, 00:18:26.158 "uuid": "dd804f27-caf9-4ed9-b778-464b2d599dbc", 00:18:26.158 "assigned_rate_limits": { 00:18:26.158 "rw_ios_per_sec": 0, 00:18:26.158 "rw_mbytes_per_sec": 0, 00:18:26.158 "r_mbytes_per_sec": 0, 00:18:26.158 "w_mbytes_per_sec": 0 00:18:26.158 }, 00:18:26.158 "claimed": false, 00:18:26.158 "zoned": false, 00:18:26.158 "supported_io_types": { 00:18:26.158 "read": true, 00:18:26.158 "write": true, 00:18:26.158 "unmap": true, 00:18:26.158 "flush": false, 00:18:26.158 "reset": true, 00:18:26.158 "nvme_admin": false, 00:18:26.158 "nvme_io": false, 00:18:26.158 "nvme_io_md": false, 00:18:26.158 "write_zeroes": true, 00:18:26.158 "zcopy": false, 00:18:26.158 "get_zone_info": false, 00:18:26.158 "zone_management": false, 00:18:26.158 "zone_append": false, 00:18:26.158 "compare": false, 00:18:26.158 "compare_and_write": false, 00:18:26.158 "abort": false, 00:18:26.158 "seek_hole": true, 00:18:26.158 "seek_data": true, 00:18:26.158 "copy": false, 00:18:26.158 "nvme_iov_md": false 00:18:26.158 }, 00:18:26.158 "driver_specific": { 00:18:26.158 "lvol": { 00:18:26.158 "lvol_store_uuid": "5284815c-bd51-4ba0-b0cc-26fff67680a8", 00:18:26.158 "base_bdev": "nvme0n1", 00:18:26.158 "thin_provision": true, 00:18:26.158 "num_allocated_clusters": 0, 00:18:26.158 "snapshot": false, 00:18:26.158 "clone": false, 00:18:26.158 "esnap_clone": false 00:18:26.158 } 00:18:26.158 } 00:18:26.158 } 00:18:26.158 ]' 00:18:26.158 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:26.416 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:18:26.416 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:26.416 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:26.416 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:26.416 18:08:42 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:18:26.416 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:18:26.416 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:18:26.416 18:08:42 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:26.673 18:08:43 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:26.673 18:08:43 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:26.673 18:08:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:26.673 18:08:43 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1380 -- # local bdev_name=dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:26.673 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:26.673 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bs 00:18:26.673 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:18:26.673 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:26.931 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:26.931 { 00:18:26.931 "name": "dd804f27-caf9-4ed9-b778-464b2d599dbc", 00:18:26.931 "aliases": [ 00:18:26.931 "lvs/nvme0n1p0" 00:18:26.931 ], 00:18:26.931 "product_name": "Logical Volume", 00:18:26.931 "block_size": 4096, 00:18:26.931 "num_blocks": 26476544, 00:18:26.931 "uuid": "dd804f27-caf9-4ed9-b778-464b2d599dbc", 00:18:26.931 "assigned_rate_limits": { 00:18:26.931 "rw_ios_per_sec": 0, 00:18:26.931 "rw_mbytes_per_sec": 0, 00:18:26.931 "r_mbytes_per_sec": 0, 00:18:26.931 "w_mbytes_per_sec": 0 00:18:26.931 }, 00:18:26.931 "claimed": false, 00:18:26.931 "zoned": false, 00:18:26.931 "supported_io_types": { 00:18:26.931 "read": true, 00:18:26.931 "write": true, 00:18:26.931 "unmap": true, 00:18:26.931 "flush": false, 00:18:26.931 "reset": true, 00:18:26.931 "nvme_admin": false, 00:18:26.931 "nvme_io": false, 00:18:26.931 "nvme_io_md": false, 00:18:26.931 "write_zeroes": true, 00:18:26.931 "zcopy": false, 00:18:26.931 "get_zone_info": false, 00:18:26.931 "zone_management": false, 00:18:26.931 "zone_append": false, 00:18:26.931 "compare": false, 00:18:26.931 "compare_and_write": false, 00:18:26.931 "abort": false, 00:18:26.931 "seek_hole": true, 00:18:26.931 "seek_data": true, 00:18:26.931 "copy": false, 00:18:26.931 "nvme_iov_md": false 00:18:26.931 }, 00:18:26.931 "driver_specific": { 00:18:26.931 "lvol": { 00:18:26.931 "lvol_store_uuid": "5284815c-bd51-4ba0-b0cc-26fff67680a8", 00:18:26.931 "base_bdev": "nvme0n1", 00:18:26.931 "thin_provision": true, 00:18:26.931 "num_allocated_clusters": 0, 00:18:26.931 "snapshot": false, 00:18:26.931 "clone": false, 00:18:26.931 "esnap_clone": false 00:18:26.931 } 00:18:26.931 } 00:18:26.931 } 00:18:26.931 ]' 00:18:26.931 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:26.931 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:18:26.931 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:27.189 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:27.189 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:27.189 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:18:27.189 18:08:43 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:18:27.189 18:08:43 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:27.447 18:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:18:27.447 18:08:43 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:27.447 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1380 -- # local bdev_name=dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:27.447 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:27.447 18:08:43 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bs 00:18:27.447 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local nb 00:18:27.447 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b dd804f27-caf9-4ed9-b778-464b2d599dbc 00:18:27.705 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:27.705 { 00:18:27.705 "name": "dd804f27-caf9-4ed9-b778-464b2d599dbc", 00:18:27.705 "aliases": [ 00:18:27.705 "lvs/nvme0n1p0" 00:18:27.705 ], 00:18:27.705 "product_name": "Logical Volume", 00:18:27.705 "block_size": 4096, 00:18:27.705 "num_blocks": 26476544, 00:18:27.705 "uuid": "dd804f27-caf9-4ed9-b778-464b2d599dbc", 00:18:27.705 "assigned_rate_limits": { 00:18:27.705 "rw_ios_per_sec": 0, 00:18:27.705 "rw_mbytes_per_sec": 0, 00:18:27.706 "r_mbytes_per_sec": 0, 00:18:27.706 "w_mbytes_per_sec": 0 00:18:27.706 }, 00:18:27.706 "claimed": false, 00:18:27.706 "zoned": false, 00:18:27.706 "supported_io_types": { 00:18:27.706 "read": true, 00:18:27.706 "write": true, 00:18:27.706 "unmap": true, 00:18:27.706 "flush": false, 00:18:27.706 "reset": true, 00:18:27.706 "nvme_admin": false, 00:18:27.706 "nvme_io": false, 00:18:27.706 "nvme_io_md": false, 00:18:27.706 "write_zeroes": true, 00:18:27.706 "zcopy": false, 00:18:27.706 "get_zone_info": false, 00:18:27.706 "zone_management": false, 00:18:27.706 "zone_append": false, 00:18:27.706 "compare": false, 00:18:27.706 "compare_and_write": false, 00:18:27.706 "abort": false, 00:18:27.706 "seek_hole": true, 00:18:27.706 "seek_data": true, 00:18:27.706 "copy": false, 00:18:27.706 "nvme_iov_md": false 00:18:27.706 }, 00:18:27.706 "driver_specific": { 00:18:27.706 "lvol": { 00:18:27.706 "lvol_store_uuid": "5284815c-bd51-4ba0-b0cc-26fff67680a8", 00:18:27.706 "base_bdev": "nvme0n1", 00:18:27.706 "thin_provision": true, 00:18:27.706 "num_allocated_clusters": 0, 00:18:27.706 "snapshot": false, 00:18:27.706 "clone": false, 00:18:27.706 "esnap_clone": false 00:18:27.706 } 00:18:27.706 } 00:18:27.706 } 00:18:27.706 ]' 00:18:27.706 18:08:43 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:27.706 18:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # bs=4096 00:18:27.706 18:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:27.706 18:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:27.706 18:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:27.706 18:08:44 ftl.ftl_bdevperf -- common/autotest_common.sh@1390 -- # echo 103424 00:18:27.706 18:08:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:18:27.706 18:08:44 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d dd804f27-caf9-4ed9-b778-464b2d599dbc -c nvc0n1p0 --l2p_dram_limit 20 00:18:27.963 [2024-10-28 18:08:44.304428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.963 [2024-10-28 18:08:44.304709] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:27.963 [2024-10-28 18:08:44.304745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:27.963 [2024-10-28 18:08:44.304762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.963 [2024-10-28 18:08:44.304879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.963 [2024-10-28 18:08:44.304906] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:27.963 [2024-10-28 18:08:44.304920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:18:27.963 [2024-10-28 18:08:44.304935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.963 [2024-10-28 18:08:44.304965] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:27.963 [2024-10-28 18:08:44.305973] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:27.963 [2024-10-28 18:08:44.306001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.963 [2024-10-28 18:08:44.306016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:27.963 [2024-10-28 18:08:44.306031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.044 ms 00:18:27.963 [2024-10-28 18:08:44.306046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.963 [2024-10-28 18:08:44.306185] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 46e175d2-7897-42b3-8c17-67c7688797de 00:18:27.963 [2024-10-28 18:08:44.307272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.963 [2024-10-28 18:08:44.307307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:27.963 [2024-10-28 18:08:44.307326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:27.963 [2024-10-28 18:08:44.307342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.963 [2024-10-28 18:08:44.312191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.963 [2024-10-28 18:08:44.312265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:27.963 [2024-10-28 18:08:44.312288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.790 ms 00:18:27.963 [2024-10-28 18:08:44.312301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.963 [2024-10-28 18:08:44.312467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.964 [2024-10-28 18:08:44.312490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:27.964 [2024-10-28 18:08:44.312514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.119 ms 00:18:27.964 [2024-10-28 18:08:44.312526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.964 [2024-10-28 18:08:44.312620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.964 [2024-10-28 18:08:44.312640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:27.964 [2024-10-28 18:08:44.312655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:18:27.964 [2024-10-28 18:08:44.312667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.964 [2024-10-28 18:08:44.312703] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:27.964 [2024-10-28 18:08:44.317438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.964 [2024-10-28 18:08:44.317485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:27.964 [2024-10-28 18:08:44.317502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.748 ms 00:18:27.964 [2024-10-28 18:08:44.317520] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.964 [2024-10-28 18:08:44.317565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.964 [2024-10-28 18:08:44.317584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:27.964 [2024-10-28 18:08:44.317597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:18:27.964 [2024-10-28 18:08:44.317611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.964 [2024-10-28 18:08:44.317669] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:27.964 [2024-10-28 18:08:44.317860] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:27.964 [2024-10-28 18:08:44.317894] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:27.964 [2024-10-28 18:08:44.317913] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:27.964 [2024-10-28 18:08:44.317930] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:27.964 [2024-10-28 18:08:44.317947] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:27.964 [2024-10-28 18:08:44.317960] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:18:27.964 [2024-10-28 18:08:44.317973] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:27.964 [2024-10-28 18:08:44.317985] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:27.964 [2024-10-28 18:08:44.317998] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:27.964 [2024-10-28 18:08:44.318011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.964 [2024-10-28 18:08:44.318027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:27.964 [2024-10-28 18:08:44.318040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.344 ms 00:18:27.964 [2024-10-28 18:08:44.318054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.964 [2024-10-28 18:08:44.318151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.964 [2024-10-28 18:08:44.318170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:27.964 [2024-10-28 18:08:44.318184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:18:27.964 [2024-10-28 18:08:44.318200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.964 [2024-10-28 18:08:44.318304] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:27.964 [2024-10-28 18:08:44.318323] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:27.964 [2024-10-28 18:08:44.318339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:27.964 [2024-10-28 18:08:44.318354] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:27.964 [2024-10-28 18:08:44.318390] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318402] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:18:27.964 
[2024-10-28 18:08:44.318415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:27.964 [2024-10-28 18:08:44.318427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318440] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:27.964 [2024-10-28 18:08:44.318452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:27.964 [2024-10-28 18:08:44.318465] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:18:27.964 [2024-10-28 18:08:44.318476] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:27.964 [2024-10-28 18:08:44.318505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:27.964 [2024-10-28 18:08:44.318517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:18:27.964 [2024-10-28 18:08:44.318533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318545] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:27.964 [2024-10-28 18:08:44.318558] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:18:27.964 [2024-10-28 18:08:44.318570] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:27.964 [2024-10-28 18:08:44.318597] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318611] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.964 [2024-10-28 18:08:44.318622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:27.964 [2024-10-28 18:08:44.318636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318647] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.964 [2024-10-28 18:08:44.318660] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:27.964 [2024-10-28 18:08:44.318671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318685] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.964 [2024-10-28 18:08:44.318699] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:18:27.964 [2024-10-28 18:08:44.318713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318724] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:27.964 [2024-10-28 18:08:44.318739] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:27.964 [2024-10-28 18:08:44.318751] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:27.964 [2024-10-28 18:08:44.318777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:27.964 [2024-10-28 18:08:44.318790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:18:27.964 [2024-10-28 18:08:44.318801] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:27.964 [2024-10-28 18:08:44.318815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:27.964 [2024-10-28 18:08:44.318826] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:18:27.964 [2024-10-28 18:08:44.318856] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:27.964 [2024-10-28 18:08:44.318884] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:18:27.964 [2024-10-28 18:08:44.318895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318908] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:27.964 [2024-10-28 18:08:44.318920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:27.964 [2024-10-28 18:08:44.318934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:27.964 [2024-10-28 18:08:44.318945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:27.964 [2024-10-28 18:08:44.318964] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:27.964 [2024-10-28 18:08:44.318976] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:27.964 [2024-10-28 18:08:44.318990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:27.964 [2024-10-28 18:08:44.319001] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:27.964 [2024-10-28 18:08:44.319014] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:27.964 [2024-10-28 18:08:44.319026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:27.964 [2024-10-28 18:08:44.319044] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:27.964 [2024-10-28 18:08:44.319059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:27.964 [2024-10-28 18:08:44.319074] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:18:27.964 [2024-10-28 18:08:44.319085] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:18:27.964 [2024-10-28 18:08:44.319099] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:18:27.964 [2024-10-28 18:08:44.319111] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:18:27.964 [2024-10-28 18:08:44.319125] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:18:27.964 [2024-10-28 18:08:44.319137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:18:27.964 [2024-10-28 18:08:44.319151] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:18:27.964 [2024-10-28 18:08:44.319163] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:18:27.964 [2024-10-28 18:08:44.319179] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:18:27.964 [2024-10-28 18:08:44.319191] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:18:27.964 [2024-10-28 18:08:44.319204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:18:27.964 [2024-10-28 18:08:44.319216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:18:27.964 [2024-10-28 18:08:44.319229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:18:27.964 [2024-10-28 18:08:44.319241] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:18:27.964 [2024-10-28 18:08:44.319255] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:27.965 [2024-10-28 18:08:44.319268] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:27.965 [2024-10-28 18:08:44.319284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:27.965 [2024-10-28 18:08:44.319296] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:27.965 [2024-10-28 18:08:44.319310] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:27.965 [2024-10-28 18:08:44.319322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:27.965 [2024-10-28 18:08:44.319337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:27.965 [2024-10-28 18:08:44.319352] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:27.965 [2024-10-28 18:08:44.319366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.100 ms 00:18:27.965 [2024-10-28 18:08:44.319378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:27.965 [2024-10-28 18:08:44.319429] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
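The startup trace above is the product of a short RPC sequence that the xtrace earlier in this log records verbatim. A minimal reconstruction of that sequence in shell, reusing the bdev names, UUIDs, and sizes from the log (a sketch for orientation, not an excerpt of the test scripts):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Carve a 5171 MiB write-buffer partition out of the cache namespace;
# the single split comes up as nvc0n1p0.
$RPC bdev_split_create nvc0n1 -s 5171 1

# Bind ftl0 to the 103424 MiB data lvol and the cache partition, capping
# the DRAM-resident L2P table at 20 MiB; the 240 s RPC timeout allows for
# the NV cache scrub announced above.
$RPC -t 240 bdev_ftl_create -b ftl0 \
    -d dd804f27-caf9-4ed9-b778-464b2d599dbc \
    -c nvc0n1p0 --l2p_dram_limit 20

The layout dump is consistent with that limit: 20971520 L2P entries at 4 bytes per address is an 80 MiB mapping table (the 80.00 MiB l2p region in the NV cache layout), of which at most about 20 MiB may stay resident in DRAM at any one time.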
00:18:27.965 [2024-10-28 18:08:44.319446] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:29.863 [2024-10-28 18:08:46.194672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.863 [2024-10-28 18:08:46.195041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:29.863 [2024-10-28 18:08:46.195088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1875.233 ms 00:18:29.863 [2024-10-28 18:08:46.195104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.863 [2024-10-28 18:08:46.228008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.863 [2024-10-28 18:08:46.228078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:29.863 [2024-10-28 18:08:46.228104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.583 ms 00:18:29.863 [2024-10-28 18:08:46.228117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.863 [2024-10-28 18:08:46.228341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.863 [2024-10-28 18:08:46.228368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:29.863 [2024-10-28 18:08:46.228390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:18:29.863 [2024-10-28 18:08:46.228402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.863 [2024-10-28 18:08:46.276678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.863 [2024-10-28 18:08:46.276753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:29.863 [2024-10-28 18:08:46.276783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.204 ms 00:18:29.863 [2024-10-28 18:08:46.276796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.863 [2024-10-28 18:08:46.276900] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.863 [2024-10-28 18:08:46.276927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:29.863 [2024-10-28 18:08:46.276945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:29.863 [2024-10-28 18:08:46.276957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.863 [2024-10-28 18:08:46.277405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.863 [2024-10-28 18:08:46.277433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:29.863 [2024-10-28 18:08:46.277451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:18:29.863 [2024-10-28 18:08:46.277463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.863 [2024-10-28 18:08:46.277633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.863 [2024-10-28 18:08:46.277655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:29.863 [2024-10-28 18:08:46.277682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.140 ms 00:18:29.863 [2024-10-28 18:08:46.277695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.863 [2024-10-28 18:08:46.294665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.863 [2024-10-28 18:08:46.294733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:29.863 [2024-10-28 
18:08:46.294760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.940 ms 00:18:29.863 [2024-10-28 18:08:46.294774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:29.863 [2024-10-28 18:08:46.308495] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:18:29.863 [2024-10-28 18:08:46.313602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:29.863 [2024-10-28 18:08:46.313663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:29.863 [2024-10-28 18:08:46.313685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.625 ms 00:18:29.863 [2024-10-28 18:08:46.313701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.121 [2024-10-28 18:08:46.368363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.121 [2024-10-28 18:08:46.368454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:30.121 [2024-10-28 18:08:46.368478] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 54.601 ms 00:18:30.121 [2024-10-28 18:08:46.368494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.121 [2024-10-28 18:08:46.368788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.121 [2024-10-28 18:08:46.368823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:30.121 [2024-10-28 18:08:46.368863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.197 ms 00:18:30.121 [2024-10-28 18:08:46.368882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.121 [2024-10-28 18:08:46.401365] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.121 [2024-10-28 18:08:46.401460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:30.121 [2024-10-28 18:08:46.401484] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.381 ms 00:18:30.121 [2024-10-28 18:08:46.401500] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.121 [2024-10-28 18:08:46.433436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.121 [2024-10-28 18:08:46.433536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:30.121 [2024-10-28 18:08:46.433560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.842 ms 00:18:30.121 [2024-10-28 18:08:46.433575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.121 [2024-10-28 18:08:46.434392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.121 [2024-10-28 18:08:46.434432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:30.121 [2024-10-28 18:08:46.434451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.718 ms 00:18:30.121 [2024-10-28 18:08:46.434466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.121 [2024-10-28 18:08:46.516987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.121 [2024-10-28 18:08:46.517063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:30.121 [2024-10-28 18:08:46.517086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.422 ms 00:18:30.121 [2024-10-28 18:08:46.517102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.121 [2024-10-28 
18:08:46.550276] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.121 [2024-10-28 18:08:46.550359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:30.121 [2024-10-28 18:08:46.550382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.011 ms 00:18:30.121 [2024-10-28 18:08:46.550401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.121 [2024-10-28 18:08:46.582942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.121 [2024-10-28 18:08:46.583173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:30.121 [2024-10-28 18:08:46.583207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.454 ms 00:18:30.121 [2024-10-28 18:08:46.583224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.380 [2024-10-28 18:08:46.616254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.380 [2024-10-28 18:08:46.616328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:30.380 [2024-10-28 18:08:46.616351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.945 ms 00:18:30.380 [2024-10-28 18:08:46.616367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.380 [2024-10-28 18:08:46.616474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.380 [2024-10-28 18:08:46.616508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:30.380 [2024-10-28 18:08:46.616524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:18:30.380 [2024-10-28 18:08:46.616539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.380 [2024-10-28 18:08:46.616718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:30.380 [2024-10-28 18:08:46.616746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:30.380 [2024-10-28 18:08:46.616759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:18:30.380 [2024-10-28 18:08:46.616775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:30.380 [2024-10-28 18:08:46.618282] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2313.070 ms, result 0 00:18:30.381 { 00:18:30.381 "name": "ftl0", 00:18:30.381 "uuid": "46e175d2-7897-42b3-8c17-67c7688797de" 00:18:30.381 } 00:18:30.381 18:08:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:18:30.381 18:08:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:18:30.381 18:08:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:18:30.642 18:08:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:18:30.642 [2024-10-28 18:08:47.082422] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:30.642 I/O size of 69632 is greater than zero copy threshold (65536). 00:18:30.642 Zero copy mechanism will not be used. 00:18:30.642 Running I/O for 4 seconds... 
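This first pass drives ftl0 at queue depth 1 with 69632-byte random writes; 69632 B is 17 x 4 KiB (68 KiB), which exceeds bdevperf's 65536-byte zero-copy threshold, hence the notice that zero copy is disabled. A hedged restatement of the invocation plus the throughput arithmetic, with the IOPS figure taken from the result table that follows (a sketch, not an excerpt of the test script):

# Same flags as on the command line above:
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    perform_tests -q 1 -w randwrite -t 4 -o 69632

# At a fixed I/O size, MiB/s is just IOPS scaled by the transfer size:
echo "scale=2; 1942.73 * 69632 / 1048576" | bc   # ~129.0, matching the 129.01 MiB/s reported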
00:18:32.947 1946.00 IOPS, 129.23 MiB/s [2024-10-28T18:08:50.366Z] 1948.50 IOPS, 129.39 MiB/s [2024-10-28T18:08:51.296Z] 1958.67 IOPS, 130.07 MiB/s [2024-10-28T18:08:51.296Z] 1943.50 IOPS, 129.06 MiB/s 00:18:34.818 Latency(us) 00:18:34.818 [2024-10-28T18:08:51.296Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.818 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:18:34.818 ftl0 : 4.00 1942.73 129.01 0.00 0.00 537.15 240.17 2666.12 00:18:34.818 [2024-10-28T18:08:51.296Z] =================================================================================================================== 00:18:34.818 [2024-10-28T18:08:51.296Z] Total : 1942.73 129.01 0.00 0.00 537.15 240.17 2666.12 00:18:34.818 [2024-10-28 18:08:51.094949] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:34.818 { 00:18:34.818 "results": [ 00:18:34.818 { 00:18:34.818 "job": "ftl0", 00:18:34.818 "core_mask": "0x1", 00:18:34.818 "workload": "randwrite", 00:18:34.818 "status": "finished", 00:18:34.818 "queue_depth": 1, 00:18:34.818 "io_size": 69632, 00:18:34.818 "runtime": 4.002095, 00:18:34.818 "iops": 1942.7324938563427, 00:18:34.818 "mibps": 129.00957967014776, 00:18:34.818 "io_failed": 0, 00:18:34.818 "io_timeout": 0, 00:18:34.818 "avg_latency_us": 537.1499907629349, 00:18:34.818 "min_latency_us": 240.17454545454547, 00:18:34.818 "max_latency_us": 2666.1236363636363 00:18:34.818 } 00:18:34.818 ], 00:18:34.818 "core_count": 1 00:18:34.818 } 00:18:34.818 18:08:51 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:18:34.818 [2024-10-28 18:08:51.277912] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:34.818 Running I/O for 4 seconds... 
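The second pass keeps the random-write workload but inverts its shape: queue depth 128 with 4 KiB I/O. Two back-of-envelope cross-checks on the numbers it reports below (added here for orientation; they are not part of the test):

# Throughput at a fixed 4 KiB transfer size:
echo "scale=2; 7252.01 * 4096 / 1048576" | bc    # ~28.3, matching the 28.33 MiB/s reported

# Little's law for a saturated queue: avg latency ~ depth / IOPS.
echo "128 / 7252.01 * 1000000" | bc -l           # ~17650 us

The Little's-law estimate lands within about 0.3% of the 17596 us average latency in the table below; the small residual is consistent with ramp time inside the 4 s measurement window.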
00:18:37.140 7759.00 IOPS, 30.31 MiB/s [2024-10-28T18:08:54.552Z] 7319.50 IOPS, 28.59 MiB/s [2024-10-28T18:08:55.486Z] 7409.33 IOPS, 28.94 MiB/s [2024-10-28T18:08:55.486Z] 7263.25 IOPS, 28.37 MiB/s 00:18:39.008 Latency(us) 00:18:39.008 [2024-10-28T18:08:55.486Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.008 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.008 ftl0 : 4.02 7252.01 28.33 0.00 0.00 17596.32 340.71 35270.28 00:18:39.008 [2024-10-28T18:08:55.486Z] =================================================================================================================== 00:18:39.008 [2024-10-28T18:08:55.486Z] Total : 7252.01 28.33 0.00 0.00 17596.32 0.00 35270.28 00:18:39.008 { 00:18:39.008 "results": [ 00:18:39.008 { 00:18:39.008 "job": "ftl0", 00:18:39.008 "core_mask": "0x1", 00:18:39.008 "workload": "randwrite", 00:18:39.008 "status": "finished", 00:18:39.008 "queue_depth": 128, 00:18:39.008 "io_size": 4096, 00:18:39.008 "runtime": 4.023435, 00:18:39.008 "iops": 7252.012273095004, 00:18:39.008 "mibps": 28.32817294177736, 00:18:39.008 "io_failed": 0, 00:18:39.008 "io_timeout": 0, 00:18:39.008 "avg_latency_us": 17596.320822537527, 00:18:39.008 "min_latency_us": 340.71272727272725, 00:18:39.008 "max_latency_us": 35270.28363636364 00:18:39.008 } 00:18:39.008 ], 00:18:39.008 "core_count": 1 00:18:39.008 } 00:18:39.008 [2024-10-28 18:08:55.312678] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:39.008 18:08:55 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:18:39.008 [2024-10-28 18:08:55.460759] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:18:39.008 Running I/O for 4 seconds... 
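The final perform_tests pass switches the workload to verify, in which bdevperf reads back what it wrote and compares the payloads, so a clean result below exercises data integrity through the FTL mapping layer rather than raw throughput alone. A sketch of the invocation, reconstructed from the flags above rather than quoted from the test script:

/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    perform_tests -q 128 -w verify -t 4 -o 4096
# The result table adds the verified LBA range to the usual latency
# columns (start 0x0, length 0x1400000 = 20971520 here).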
00:18:41.342 6076.00 IOPS, 23.73 MiB/s [2024-10-28T18:08:58.754Z] 5855.00 IOPS, 22.87 MiB/s [2024-10-28T18:08:59.688Z] 5869.00 IOPS, 22.93 MiB/s [2024-10-28T18:08:59.689Z] 5874.75 IOPS, 22.95 MiB/s 00:18:43.211 Latency(us) 00:18:43.211 [2024-10-28T18:08:59.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:43.211 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:43.211 Verification LBA range: start 0x0 length 0x1400000 00:18:43.211 ftl0 : 4.01 5887.11 23.00 0.00 0.00 21668.30 379.81 31457.28 00:18:43.211 [2024-10-28T18:08:59.689Z] =================================================================================================================== 00:18:43.211 [2024-10-28T18:08:59.689Z] Total : 5887.11 23.00 0.00 0.00 21668.30 0.00 31457.28 00:18:43.211 [2024-10-28 18:08:59.492611] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:18:43.211 { 00:18:43.211 "results": [ 00:18:43.211 { 00:18:43.211 "job": "ftl0", 00:18:43.211 "core_mask": "0x1", 00:18:43.211 "workload": "verify", 00:18:43.211 "status": "finished", 00:18:43.211 "verify_range": { 00:18:43.211 "start": 0, 00:18:43.211 "length": 20971520 00:18:43.211 }, 00:18:43.211 "queue_depth": 128, 00:18:43.211 "io_size": 4096, 00:18:43.211 "runtime": 4.013347, 00:18:43.211 "iops": 5887.106198392514, 00:18:43.211 "mibps": 22.996508587470757, 00:18:43.211 "io_failed": 0, 00:18:43.211 "io_timeout": 0, 00:18:43.211 "avg_latency_us": 21668.2977912019, 00:18:43.211 "min_latency_us": 379.8109090909091, 00:18:43.211 "max_latency_us": 31457.28 00:18:43.211 } 00:18:43.211 ], 00:18:43.211 "core_count": 1 00:18:43.211 } 00:18:43.211 18:08:59 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:18:43.468 [2024-10-28 18:08:59.818224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.468 [2024-10-28 18:08:59.818296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:43.468 [2024-10-28 18:08:59.818322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:18:43.468 [2024-10-28 18:08:59.818338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.468 [2024-10-28 18:08:59.818372] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:43.468 [2024-10-28 18:08:59.821774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.468 [2024-10-28 18:08:59.821816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:43.468 [2024-10-28 18:08:59.821851] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.371 ms 00:18:43.468 [2024-10-28 18:08:59.821868] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.468 [2024-10-28 18:08:59.823432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.468 [2024-10-28 18:08:59.823478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:43.468 [2024-10-28 18:08:59.823502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.527 ms 00:18:43.469 [2024-10-28 18:08:59.823516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.002336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.728 [2024-10-28 18:09:00.002415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:18:43.728 [2024-10-28 18:09:00.002447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 178.775 ms 00:18:43.728 [2024-10-28 18:09:00.002461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.009210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.728 [2024-10-28 18:09:00.009460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:43.728 [2024-10-28 18:09:00.009500] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.689 ms 00:18:43.728 [2024-10-28 18:09:00.009515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.041771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.728 [2024-10-28 18:09:00.041851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:43.728 [2024-10-28 18:09:00.041878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.113 ms 00:18:43.728 [2024-10-28 18:09:00.041891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.061318] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.728 [2024-10-28 18:09:00.061395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:43.728 [2024-10-28 18:09:00.061426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.342 ms 00:18:43.728 [2024-10-28 18:09:00.061439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.061670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.728 [2024-10-28 18:09:00.061692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:43.728 [2024-10-28 18:09:00.061713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:43.728 [2024-10-28 18:09:00.061725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.093949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.728 [2024-10-28 18:09:00.094018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:43.728 [2024-10-28 18:09:00.094043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.190 ms 00:18:43.728 [2024-10-28 18:09:00.094056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.125926] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.728 [2024-10-28 18:09:00.126005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:43.728 [2024-10-28 18:09:00.126031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.785 ms 00:18:43.728 [2024-10-28 18:09:00.126044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.158643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.728 [2024-10-28 18:09:00.158725] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:43.728 [2024-10-28 18:09:00.158751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.496 ms 00:18:43.728 [2024-10-28 18:09:00.158764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.191142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.728 [2024-10-28 18:09:00.191221] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:43.728 [2024-10-28 18:09:00.191250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.151 ms 00:18:43.728 [2024-10-28 18:09:00.191263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.728 [2024-10-28 18:09:00.191353] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:43.728 [2024-10-28 18:09:00.191379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:18:43.728 [2024-10-28 18:09:00.191685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191747] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:43.728 [2024-10-28 18:09:00.191887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.191901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.191914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.191940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.191953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.191967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192573] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192765] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:43.729 [2024-10-28 18:09:00.192829] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:43.729 [2024-10-28 18:09:00.192858] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 46e175d2-7897-42b3-8c17-67c7688797de 00:18:43.729 [2024-10-28 18:09:00.192872] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:43.729 [2024-10-28 18:09:00.192886] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:43.729 [2024-10-28 18:09:00.192900] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:43.729 [2024-10-28 18:09:00.192914] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:43.729 [2024-10-28 18:09:00.192925] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:43.729 [2024-10-28 18:09:00.192939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:18:43.729 [2024-10-28 18:09:00.192951] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:43.729 [2024-10-28 18:09:00.192965] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:43.729 [2024-10-28 18:09:00.192976] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:43.729 [2024-10-28 18:09:00.192990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.729 [2024-10-28 18:09:00.193002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:43.729 [2024-10-28 18:09:00.193018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.642 ms 00:18:43.729 [2024-10-28 18:09:00.193029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.210048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.988 [2024-10-28 18:09:00.210120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:43.988 [2024-10-28 18:09:00.210145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.924 ms 00:18:43.988 [2024-10-28 18:09:00.210158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.210618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:43.988 [2024-10-28 18:09:00.210639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:43.988 [2024-10-28 18:09:00.210656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms 00:18:43.988 [2024-10-28 18:09:00.210669] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.257185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.988 [2024-10-28 18:09:00.257258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:43.988 [2024-10-28 18:09:00.257284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.988 [2024-10-28 18:09:00.257297] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.257393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.988 [2024-10-28 18:09:00.257411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:43.988 [2024-10-28 18:09:00.257426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.988 [2024-10-28 18:09:00.257438] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.257599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.988 [2024-10-28 18:09:00.257624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:43.988 [2024-10-28 18:09:00.257640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.988 [2024-10-28 18:09:00.257652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.257679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.988 [2024-10-28 18:09:00.257693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:43.988 [2024-10-28 18:09:00.257708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.988 [2024-10-28 18:09:00.257720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.362292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.988 [2024-10-28 18:09:00.362376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:43.988 [2024-10-28 18:09:00.362404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.988 [2024-10-28 18:09:00.362417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.448467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.988 [2024-10-28 18:09:00.448542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:43.988 [2024-10-28 18:09:00.448566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.988 [2024-10-28 18:09:00.448579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.448722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.988 [2024-10-28 18:09:00.448742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:43.988 [2024-10-28 18:09:00.448762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.988 [2024-10-28 18:09:00.448774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.448870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.988 [2024-10-28 18:09:00.448892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:43.988 [2024-10-28 18:09:00.448907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:43.988 [2024-10-28 18:09:00.448919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:43.988 [2024-10-28 18:09:00.449062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:43.988 [2024-10-28 18:09:00.449082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:43.988 [2024-10-28 18:09:00.449105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms
00:18:43.988 [2024-10-28 18:09:00.449116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.988 [2024-10-28 18:09:00.449174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.988 [2024-10-28 18:09:00.449192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:18:43.988 [2024-10-28 18:09:00.449207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.988 [2024-10-28 18:09:00.449218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.988 [2024-10-28 18:09:00.449267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.988 [2024-10-28 18:09:00.449290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:18:43.988 [2024-10-28 18:09:00.449306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.988 [2024-10-28 18:09:00.449321] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.988 [2024-10-28 18:09:00.449391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:18:43.988 [2024-10-28 18:09:00.449423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:18:43.988 [2024-10-28 18:09:00.449440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:18:43.988 [2024-10-28 18:09:00.449452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:18:43.988 [2024-10-28 18:09:00.449664] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 631.364 ms, result 0
00:18:43.988 true
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 74933
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 74933 ']'
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@956 -- # kill -0 74933
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # uname
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 74933
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:18:44.247 killing process with pid 74933
Received shutdown signal, test time was about 4.000000 seconds
00:18:44.247 
00:18:44.247 Latency(us)
[2024-10-28T18:09:00.725Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:44.247 [2024-10-28T18:09:00.725Z] ===================================================================================================================
00:18:44.247 [2024-10-28T18:09:00.725Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 74933'
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@971 -- # kill 74933
00:18:44.247 18:09:00 ftl.ftl_bdevperf -- common/autotest_common.sh@976 -- # wait 74933
00:18:47.529 Remove shared memory files
18:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:18:47.529 18:09:03 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm
00:18:47.529 18:09:03 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files
00:18:47.529 18:09:03 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:18:47.529 18:09:03 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:18:47.529 18:09:03 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:18:47.529 18:09:03 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:18:47.529 18:09:03 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:18:47.529 ************************************ 00:18:47.529 END TEST ftl_bdevperf 00:18:47.529 ************************************ 00:18:47.529 00:18:47.529 real 0m24.721s 00:18:47.530 user 0m28.751s 00:18:47.530 sys 0m1.085s 00:18:47.530 18:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:18:47.530 18:09:03 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:18:47.530 18:09:03 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:47.530 18:09:03 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:18:47.530 18:09:03 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:18:47.530 18:09:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:18:47.530 ************************************ 00:18:47.530 START TEST ftl_trim 00:18:47.530 ************************************ 00:18:47.530 18:09:03 ftl.ftl_trim -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:18:47.530 * Looking for test storage... 00:18:47.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:18:47.530 18:09:03 ftl.ftl_trim -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:18:47.530 18:09:03 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lcov --version 00:18:47.530 18:09:03 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:18:47.787 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:18:47.787 18:09:04 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:18:47.788 18:09:04 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:18:47.788 18:09:04 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:47.788 18:09:04 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:18:47.788 18:09:04 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:18:47.788 18:09:04 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:47.788 18:09:04 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:47.788 18:09:04 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:18:47.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.788 --rc genhtml_branch_coverage=1 00:18:47.788 --rc genhtml_function_coverage=1 00:18:47.788 --rc genhtml_legend=1 00:18:47.788 --rc geninfo_all_blocks=1 00:18:47.788 --rc geninfo_unexecuted_blocks=1 00:18:47.788 00:18:47.788 ' 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:18:47.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.788 --rc genhtml_branch_coverage=1 00:18:47.788 --rc genhtml_function_coverage=1 00:18:47.788 --rc genhtml_legend=1 00:18:47.788 --rc geninfo_all_blocks=1 00:18:47.788 --rc geninfo_unexecuted_blocks=1 00:18:47.788 00:18:47.788 ' 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:18:47.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.788 --rc genhtml_branch_coverage=1 00:18:47.788 --rc genhtml_function_coverage=1 00:18:47.788 --rc genhtml_legend=1 00:18:47.788 --rc geninfo_all_blocks=1 00:18:47.788 --rc geninfo_unexecuted_blocks=1 00:18:47.788 00:18:47.788 ' 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:18:47.788 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:47.788 --rc genhtml_branch_coverage=1 00:18:47.788 --rc genhtml_function_coverage=1 00:18:47.788 --rc genhtml_legend=1 00:18:47.788 --rc geninfo_all_blocks=1 00:18:47.788 --rc geninfo_unexecuted_blocks=1 00:18:47.788 00:18:47.788 ' 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
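
The xtrace above steps through the lcov version check in scripts/common.sh (lt 1.15 2 via cmp_versions), which decides whether the lcov-1.x or lcov-2.x coverage options get exported. A minimal standalone sketch of that comparison logic follows; the structure is inferred from the trace rather than copied from the repo, and numeric version components are assumed (the real helper validates them with decimal):

cmp_versions() {
    local IFS='.-:'                          # split versions on '.', '-' and ':', as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 ver1_l=${#ver1[@]} ver2_l=${#ver2[@]} v d1 d2
    for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}    # a missing component compares as 0
        if ((d1 > d2)); then [[ $op == *'>'* ]]; return; fi   # decided at first differing component
        if ((d1 < d2)); then [[ $op == *'<'* ]]; return; fi
    done
    [[ $op == *'='* ]]                       # all components equal
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo 'lcov 1.15 predates 2'     # first components 1 < 2, so the lcov-1.x option set above is used
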
00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:18:47.788 18:09:04 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=75282 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 75282 00:18:47.788 18:09:04 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75282 ']' 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.788 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:18:47.788 18:09:04 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:18:47.788 [2024-10-28 18:09:04.224335] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:18:47.788 [2024-10-28 18:09:04.224726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75282 ] 00:18:48.047 [2024-10-28 18:09:04.414637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:48.305 [2024-10-28 18:09:04.557473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.305 [2024-10-28 18:09:04.557612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.305 [2024-10-28 18:09:04.557619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:48.872 18:09:05 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:18:48.872 18:09:05 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:18:48.872 18:09:05 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:18:48.872 18:09:05 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:18:48.872 18:09:05 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:18:48.872 18:09:05 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:18:48.872 18:09:05 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:18:48.872 18:09:05 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:18:49.436 18:09:05 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:18:49.436 18:09:05 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:18:49.436 18:09:05 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:18:49.436 18:09:05 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:18:49.436 18:09:05 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:49.436 18:09:05 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:18:49.436 18:09:05 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:18:49.436 18:09:05 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:18:49.694 18:09:06 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:49.694 { 00:18:49.694 "name": "nvme0n1", 00:18:49.694 "aliases": [ 
00:18:49.694 "617483ee-efa8-4dd8-a125-fab3835818c1" 00:18:49.694 ], 00:18:49.694 "product_name": "NVMe disk", 00:18:49.694 "block_size": 4096, 00:18:49.694 "num_blocks": 1310720, 00:18:49.694 "uuid": "617483ee-efa8-4dd8-a125-fab3835818c1", 00:18:49.694 "numa_id": -1, 00:18:49.694 "assigned_rate_limits": { 00:18:49.694 "rw_ios_per_sec": 0, 00:18:49.694 "rw_mbytes_per_sec": 0, 00:18:49.694 "r_mbytes_per_sec": 0, 00:18:49.694 "w_mbytes_per_sec": 0 00:18:49.694 }, 00:18:49.694 "claimed": true, 00:18:49.694 "claim_type": "read_many_write_one", 00:18:49.694 "zoned": false, 00:18:49.694 "supported_io_types": { 00:18:49.694 "read": true, 00:18:49.694 "write": true, 00:18:49.694 "unmap": true, 00:18:49.694 "flush": true, 00:18:49.694 "reset": true, 00:18:49.694 "nvme_admin": true, 00:18:49.694 "nvme_io": true, 00:18:49.694 "nvme_io_md": false, 00:18:49.694 "write_zeroes": true, 00:18:49.694 "zcopy": false, 00:18:49.694 "get_zone_info": false, 00:18:49.694 "zone_management": false, 00:18:49.694 "zone_append": false, 00:18:49.694 "compare": true, 00:18:49.694 "compare_and_write": false, 00:18:49.694 "abort": true, 00:18:49.694 "seek_hole": false, 00:18:49.694 "seek_data": false, 00:18:49.694 "copy": true, 00:18:49.694 "nvme_iov_md": false 00:18:49.694 }, 00:18:49.694 "driver_specific": { 00:18:49.694 "nvme": [ 00:18:49.694 { 00:18:49.694 "pci_address": "0000:00:11.0", 00:18:49.694 "trid": { 00:18:49.694 "trtype": "PCIe", 00:18:49.694 "traddr": "0000:00:11.0" 00:18:49.694 }, 00:18:49.694 "ctrlr_data": { 00:18:49.694 "cntlid": 0, 00:18:49.694 "vendor_id": "0x1b36", 00:18:49.694 "model_number": "QEMU NVMe Ctrl", 00:18:49.694 "serial_number": "12341", 00:18:49.694 "firmware_revision": "8.0.0", 00:18:49.694 "subnqn": "nqn.2019-08.org.qemu:12341", 00:18:49.694 "oacs": { 00:18:49.694 "security": 0, 00:18:49.694 "format": 1, 00:18:49.694 "firmware": 0, 00:18:49.694 "ns_manage": 1 00:18:49.694 }, 00:18:49.694 "multi_ctrlr": false, 00:18:49.694 "ana_reporting": false 00:18:49.694 }, 00:18:49.694 "vs": { 00:18:49.694 "nvme_version": "1.4" 00:18:49.694 }, 00:18:49.694 "ns_data": { 00:18:49.694 "id": 1, 00:18:49.694 "can_share": false 00:18:49.694 } 00:18:49.694 } 00:18:49.694 ], 00:18:49.694 "mp_policy": "active_passive" 00:18:49.694 } 00:18:49.694 } 00:18:49.694 ]' 00:18:49.694 18:09:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:49.694 18:09:06 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:18:49.694 18:09:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:49.952 18:09:06 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=1310720 00:18:49.952 18:09:06 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:18:49.952 18:09:06 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 5120 00:18:49.952 18:09:06 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:18:49.952 18:09:06 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:18:49.952 18:09:06 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:18:49.952 18:09:06 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:18:49.952 18:09:06 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:18:50.210 18:09:06 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=5284815c-bd51-4ba0-b0cc-26fff67680a8 00:18:50.210 18:09:06 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:18:50.210 18:09:06 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 5284815c-bd51-4ba0-b0cc-26fff67680a8 00:18:50.468 18:09:06 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:18:50.725 18:09:07 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=b8826329-ef81-47b5-b6ce-887246647610 00:18:50.725 18:09:07 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u b8826329-ef81-47b5-b6ce-887246647610 00:18:50.983 18:09:07 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:50.983 18:09:07 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:50.983 18:09:07 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:18:50.983 18:09:07 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:18:50.983 18:09:07 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:50.983 18:09:07 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:18:50.983 18:09:07 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:50.983 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:50.983 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:50.983 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:18:50.983 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:18:50.983 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:51.283 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:51.283 { 00:18:51.283 "name": "78828639-ef3b-4eca-b66b-f15e12c30e72", 00:18:51.283 "aliases": [ 00:18:51.283 "lvs/nvme0n1p0" 00:18:51.283 ], 00:18:51.283 "product_name": "Logical Volume", 00:18:51.283 "block_size": 4096, 00:18:51.283 "num_blocks": 26476544, 00:18:51.283 "uuid": "78828639-ef3b-4eca-b66b-f15e12c30e72", 00:18:51.283 "assigned_rate_limits": { 00:18:51.283 "rw_ios_per_sec": 0, 00:18:51.283 "rw_mbytes_per_sec": 0, 00:18:51.283 "r_mbytes_per_sec": 0, 00:18:51.283 "w_mbytes_per_sec": 0 00:18:51.283 }, 00:18:51.283 "claimed": false, 00:18:51.283 "zoned": false, 00:18:51.283 "supported_io_types": { 00:18:51.283 "read": true, 00:18:51.283 "write": true, 00:18:51.283 "unmap": true, 00:18:51.283 "flush": false, 00:18:51.283 "reset": true, 00:18:51.283 "nvme_admin": false, 00:18:51.283 "nvme_io": false, 00:18:51.283 "nvme_io_md": false, 00:18:51.283 "write_zeroes": true, 00:18:51.283 "zcopy": false, 00:18:51.283 "get_zone_info": false, 00:18:51.283 "zone_management": false, 00:18:51.283 "zone_append": false, 00:18:51.283 "compare": false, 00:18:51.283 "compare_and_write": false, 00:18:51.283 "abort": false, 00:18:51.283 "seek_hole": true, 00:18:51.283 "seek_data": true, 00:18:51.283 "copy": false, 00:18:51.283 "nvme_iov_md": false 00:18:51.283 }, 00:18:51.283 "driver_specific": { 00:18:51.283 "lvol": { 00:18:51.283 "lvol_store_uuid": "b8826329-ef81-47b5-b6ce-887246647610", 00:18:51.283 "base_bdev": "nvme0n1", 00:18:51.283 "thin_provision": true, 00:18:51.283 "num_allocated_clusters": 0, 00:18:51.283 "snapshot": false, 00:18:51.283 "clone": false, 00:18:51.283 "esnap_clone": false 00:18:51.283 } 00:18:51.283 } 00:18:51.283 } 00:18:51.283 ]' 00:18:51.283 18:09:07 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:51.283 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:18:51.540 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:51.540 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:51.540 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:51.540 18:09:07 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:18:51.540 18:09:07 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:18:51.540 18:09:07 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:18:51.540 18:09:07 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:18:51.798 18:09:08 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:18:51.798 18:09:08 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:18:51.798 18:09:08 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:51.798 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:51.798 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:51.798 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:18:51.798 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:18:51.798 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:52.056 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:52.056 { 00:18:52.056 "name": "78828639-ef3b-4eca-b66b-f15e12c30e72", 00:18:52.056 "aliases": [ 00:18:52.056 "lvs/nvme0n1p0" 00:18:52.056 ], 00:18:52.056 "product_name": "Logical Volume", 00:18:52.056 "block_size": 4096, 00:18:52.056 "num_blocks": 26476544, 00:18:52.056 "uuid": "78828639-ef3b-4eca-b66b-f15e12c30e72", 00:18:52.056 "assigned_rate_limits": { 00:18:52.056 "rw_ios_per_sec": 0, 00:18:52.056 "rw_mbytes_per_sec": 0, 00:18:52.056 "r_mbytes_per_sec": 0, 00:18:52.056 "w_mbytes_per_sec": 0 00:18:52.056 }, 00:18:52.056 "claimed": false, 00:18:52.056 "zoned": false, 00:18:52.056 "supported_io_types": { 00:18:52.056 "read": true, 00:18:52.056 "write": true, 00:18:52.056 "unmap": true, 00:18:52.056 "flush": false, 00:18:52.056 "reset": true, 00:18:52.056 "nvme_admin": false, 00:18:52.056 "nvme_io": false, 00:18:52.056 "nvme_io_md": false, 00:18:52.056 "write_zeroes": true, 00:18:52.056 "zcopy": false, 00:18:52.056 "get_zone_info": false, 00:18:52.056 "zone_management": false, 00:18:52.056 "zone_append": false, 00:18:52.056 "compare": false, 00:18:52.056 "compare_and_write": false, 00:18:52.056 "abort": false, 00:18:52.056 "seek_hole": true, 00:18:52.056 "seek_data": true, 00:18:52.056 "copy": false, 00:18:52.056 "nvme_iov_md": false 00:18:52.056 }, 00:18:52.056 "driver_specific": { 00:18:52.056 "lvol": { 00:18:52.056 "lvol_store_uuid": "b8826329-ef81-47b5-b6ce-887246647610", 00:18:52.056 "base_bdev": "nvme0n1", 00:18:52.056 "thin_provision": true, 00:18:52.056 "num_allocated_clusters": 0, 00:18:52.056 "snapshot": false, 00:18:52.056 "clone": false, 00:18:52.056 "esnap_clone": false 00:18:52.056 } 00:18:52.056 } 00:18:52.056 } 00:18:52.056 ]' 00:18:52.056 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:52.056 18:09:08 ftl.ftl_trim -- 
common/autotest_common.sh@1385 -- # bs=4096 00:18:52.056 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:52.056 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # nb=26476544 00:18:52.056 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:52.056 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:18:52.056 18:09:08 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:18:52.056 18:09:08 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:18:52.621 18:09:08 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:18:52.621 18:09:08 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:18:52.621 18:09:08 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:52.621 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1380 -- # local bdev_name=78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:52.621 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1381 -- # local bdev_info 00:18:52.621 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bs 00:18:52.621 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local nb 00:18:52.621 18:09:08 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 78828639-ef3b-4eca-b66b-f15e12c30e72 00:18:52.621 18:09:09 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:18:52.621 { 00:18:52.621 "name": "78828639-ef3b-4eca-b66b-f15e12c30e72", 00:18:52.622 "aliases": [ 00:18:52.622 "lvs/nvme0n1p0" 00:18:52.622 ], 00:18:52.622 "product_name": "Logical Volume", 00:18:52.622 "block_size": 4096, 00:18:52.622 "num_blocks": 26476544, 00:18:52.622 "uuid": "78828639-ef3b-4eca-b66b-f15e12c30e72", 00:18:52.622 "assigned_rate_limits": { 00:18:52.622 "rw_ios_per_sec": 0, 00:18:52.622 "rw_mbytes_per_sec": 0, 00:18:52.622 "r_mbytes_per_sec": 0, 00:18:52.622 "w_mbytes_per_sec": 0 00:18:52.622 }, 00:18:52.622 "claimed": false, 00:18:52.622 "zoned": false, 00:18:52.622 "supported_io_types": { 00:18:52.622 "read": true, 00:18:52.622 "write": true, 00:18:52.622 "unmap": true, 00:18:52.622 "flush": false, 00:18:52.622 "reset": true, 00:18:52.622 "nvme_admin": false, 00:18:52.622 "nvme_io": false, 00:18:52.622 "nvme_io_md": false, 00:18:52.622 "write_zeroes": true, 00:18:52.622 "zcopy": false, 00:18:52.622 "get_zone_info": false, 00:18:52.622 "zone_management": false, 00:18:52.622 "zone_append": false, 00:18:52.622 "compare": false, 00:18:52.622 "compare_and_write": false, 00:18:52.622 "abort": false, 00:18:52.622 "seek_hole": true, 00:18:52.622 "seek_data": true, 00:18:52.622 "copy": false, 00:18:52.622 "nvme_iov_md": false 00:18:52.622 }, 00:18:52.622 "driver_specific": { 00:18:52.622 "lvol": { 00:18:52.622 "lvol_store_uuid": "b8826329-ef81-47b5-b6ce-887246647610", 00:18:52.622 "base_bdev": "nvme0n1", 00:18:52.622 "thin_provision": true, 00:18:52.622 "num_allocated_clusters": 0, 00:18:52.622 "snapshot": false, 00:18:52.622 "clone": false, 00:18:52.622 "esnap_clone": false 00:18:52.622 } 00:18:52.622 } 00:18:52.622 } 00:18:52.622 ]' 00:18:52.622 18:09:09 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:18:52.879 18:09:09 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # bs=4096 00:18:52.879 18:09:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:18:52.879 18:09:09 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # 
nb=26476544 00:18:52.879 18:09:09 ftl.ftl_trim -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:18:52.879 18:09:09 ftl.ftl_trim -- common/autotest_common.sh@1390 -- # echo 103424 00:18:52.879 18:09:09 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:18:52.879 18:09:09 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 78828639-ef3b-4eca-b66b-f15e12c30e72 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:18:53.138 [2024-10-28 18:09:09.457865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.458135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:18:53.138 [2024-10-28 18:09:09.458179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:18:53.138 [2024-10-28 18:09:09.458196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.461762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.461980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:53.138 [2024-10-28 18:09:09.462018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.522 ms 00:18:53.138 [2024-10-28 18:09:09.462034] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.462247] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:18:53.138 [2024-10-28 18:09:09.463215] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:18:53.138 [2024-10-28 18:09:09.463256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.463273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:53.138 [2024-10-28 18:09:09.463289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.022 ms 00:18:53.138 [2024-10-28 18:09:09.463302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.463551] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0cdd57b4-e896-4e97-a9c9-3c575802f024 00:18:53.138 [2024-10-28 18:09:09.464675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.464721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:18:53.138 [2024-10-28 18:09:09.464740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:18:53.138 [2024-10-28 18:09:09.464755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.470028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.470101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:53.138 [2024-10-28 18:09:09.470128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.153 ms 00:18:53.138 [2024-10-28 18:09:09.470148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.470339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.470382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:53.138 [2024-10-28 18:09:09.470398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.096 ms 00:18:53.138 [2024-10-28 18:09:09.470417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.470476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.470508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:18:53.138 [2024-10-28 18:09:09.470532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:18:53.138 [2024-10-28 18:09:09.470551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.470608] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:18:53.138 [2024-10-28 18:09:09.475374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.475430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:53.138 [2024-10-28 18:09:09.475461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.773 ms 00:18:53.138 [2024-10-28 18:09:09.475475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.475599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.475639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:18:53.138 [2024-10-28 18:09:09.475673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:18:53.138 [2024-10-28 18:09:09.475705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.475750] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:18:53.138 [2024-10-28 18:09:09.475947] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:18:53.138 [2024-10-28 18:09:09.476010] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:18:53.138 [2024-10-28 18:09:09.476034] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:18:53.138 [2024-10-28 18:09:09.476052] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:18:53.138 [2024-10-28 18:09:09.476071] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:18:53.138 [2024-10-28 18:09:09.476099] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:18:53.138 [2024-10-28 18:09:09.476116] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:18:53.138 [2024-10-28 18:09:09.476132] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:18:53.138 [2024-10-28 18:09:09.476147] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:18:53.138 [2024-10-28 18:09:09.476169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 [2024-10-28 18:09:09.476182] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:18:53.138 [2024-10-28 18:09:09.476198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.418 ms 00:18:53.138 [2024-10-28 18:09:09.476210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.476344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.138 
[2024-10-28 18:09:09.476379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:18:53.138 [2024-10-28 18:09:09.476397] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:18:53.138 [2024-10-28 18:09:09.476410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.138 [2024-10-28 18:09:09.476583] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:18:53.138 [2024-10-28 18:09:09.476621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:18:53.138 [2024-10-28 18:09:09.476641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.138 [2024-10-28 18:09:09.476655] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.138 [2024-10-28 18:09:09.476670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:18:53.138 [2024-10-28 18:09:09.476682] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:18:53.138 [2024-10-28 18:09:09.476696] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:18:53.138 [2024-10-28 18:09:09.476708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:18:53.139 [2024-10-28 18:09:09.476726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:18:53.139 [2024-10-28 18:09:09.476738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.139 [2024-10-28 18:09:09.476751] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:18:53.139 [2024-10-28 18:09:09.476763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:18:53.139 [2024-10-28 18:09:09.476776] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:18:53.139 [2024-10-28 18:09:09.476788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:18:53.139 [2024-10-28 18:09:09.476802] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:18:53.139 [2024-10-28 18:09:09.476813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.139 [2024-10-28 18:09:09.476829] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:18:53.139 [2024-10-28 18:09:09.476871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:18:53.139 [2024-10-28 18:09:09.476928] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.139 [2024-10-28 18:09:09.476954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:18:53.139 [2024-10-28 18:09:09.476984] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:18:53.139 [2024-10-28 18:09:09.477005] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.139 [2024-10-28 18:09:09.477020] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:18:53.139 [2024-10-28 18:09:09.477032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:18:53.139 [2024-10-28 18:09:09.477046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.139 [2024-10-28 18:09:09.477057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:18:53.139 [2024-10-28 18:09:09.477077] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:18:53.139 [2024-10-28 18:09:09.477098] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.139 [2024-10-28 18:09:09.477117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:18:53.139 [2024-10-28 18:09:09.477129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:18:53.139 [2024-10-28 18:09:09.477143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:18:53.139 [2024-10-28 18:09:09.477155] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:18:53.139 [2024-10-28 18:09:09.477171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:18:53.139 [2024-10-28 18:09:09.477183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.139 [2024-10-28 18:09:09.477196] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:18:53.139 [2024-10-28 18:09:09.477208] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:18:53.139 [2024-10-28 18:09:09.477221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:18:53.139 [2024-10-28 18:09:09.477233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:18:53.139 [2024-10-28 18:09:09.477247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:18:53.139 [2024-10-28 18:09:09.477259] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.139 [2024-10-28 18:09:09.477272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:18:53.139 [2024-10-28 18:09:09.477284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:18:53.139 [2024-10-28 18:09:09.477298] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.139 [2024-10-28 18:09:09.477309] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:18:53.139 [2024-10-28 18:09:09.477324] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:18:53.139 [2024-10-28 18:09:09.477336] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:18:53.139 [2024-10-28 18:09:09.477355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:18:53.139 [2024-10-28 18:09:09.477377] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:18:53.139 [2024-10-28 18:09:09.477401] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:18:53.139 [2024-10-28 18:09:09.477414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:18:53.139 [2024-10-28 18:09:09.477444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:18:53.139 [2024-10-28 18:09:09.477458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:18:53.139 [2024-10-28 18:09:09.477472] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:18:53.139 [2024-10-28 18:09:09.477490] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:18:53.139 [2024-10-28 18:09:09.477508] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.139 [2024-10-28 18:09:09.477523] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:18:53.139 [2024-10-28 18:09:09.477538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:18:53.139 [2024-10-28 18:09:09.477551] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
blk_sz:0x80 00:18:53.139 [2024-10-28 18:09:09.477565] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:18:53.139 [2024-10-28 18:09:09.477578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:18:53.139 [2024-10-28 18:09:09.477594] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:18:53.139 [2024-10-28 18:09:09.477607] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:18:53.139 [2024-10-28 18:09:09.477628] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:18:53.139 [2024-10-28 18:09:09.477652] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:18:53.139 [2024-10-28 18:09:09.477681] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:18:53.139 [2024-10-28 18:09:09.477699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:18:53.139 [2024-10-28 18:09:09.477715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:18:53.139 [2024-10-28 18:09:09.477727] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:18:53.139 [2024-10-28 18:09:09.477742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:18:53.139 [2024-10-28 18:09:09.477755] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:18:53.139 [2024-10-28 18:09:09.477780] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:18:53.139 [2024-10-28 18:09:09.477793] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:18:53.139 [2024-10-28 18:09:09.477808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:18:53.139 [2024-10-28 18:09:09.477821] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:18:53.139 [2024-10-28 18:09:09.477850] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:18:53.139 [2024-10-28 18:09:09.477874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:53.139 [2024-10-28 18:09:09.477891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:18:53.139 [2024-10-28 18:09:09.477904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.384 ms 00:18:53.139 [2024-10-28 18:09:09.477919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:53.139 [2024-10-28 18:09:09.478012] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:18:53.139 [2024-10-28 18:09:09.478044] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:18:55.037 [2024-10-28 18:09:11.444010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.037 [2024-10-28 18:09:11.444083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:18:55.037 [2024-10-28 18:09:11.444107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1966.010 ms 00:18:55.037 [2024-10-28 18:09:11.444123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.037 [2024-10-28 18:09:11.476473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.037 [2024-10-28 18:09:11.476541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:55.037 [2024-10-28 18:09:11.476563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.003 ms 00:18:55.037 [2024-10-28 18:09:11.476579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.037 [2024-10-28 18:09:11.476768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.037 [2024-10-28 18:09:11.476800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:18:55.037 [2024-10-28 18:09:11.476817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:18:55.037 [2024-10-28 18:09:11.476859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.527263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.527579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:55.296 [2024-10-28 18:09:11.527621] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.307 ms 00:18:55.296 [2024-10-28 18:09:11.527645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.527892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.527935] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:55.296 [2024-10-28 18:09:11.527957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:55.296 [2024-10-28 18:09:11.527975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.528370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.528410] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:55.296 [2024-10-28 18:09:11.528430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:18:55.296 [2024-10-28 18:09:11.528448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.528661] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.528692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:55.296 [2024-10-28 18:09:11.528712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.145 ms 00:18:55.296 [2024-10-28 18:09:11.528732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.548200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.548487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:18:55.296 [2024-10-28 18:09:11.548626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.397 ms 00:18:55.296 [2024-10-28 18:09:11.548772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.562511] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:18:55.296 [2024-10-28 18:09:11.576916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.577233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:18:55.296 [2024-10-28 18:09:11.577386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.880 ms 00:18:55.296 [2024-10-28 18:09:11.577412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.640578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.640710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:18:55.296 [2024-10-28 18:09:11.640782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.008 ms 00:18:55.296 [2024-10-28 18:09:11.640853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.641294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.641451] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:18:55.296 [2024-10-28 18:09:11.641489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.168 ms 00:18:55.296 [2024-10-28 18:09:11.641505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.673334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.673550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:18:55.296 [2024-10-28 18:09:11.673697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.774 ms 00:18:55.296 [2024-10-28 18:09:11.673724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.704849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.705049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:18:55.296 [2024-10-28 18:09:11.705087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.843 ms 00:18:55.296 [2024-10-28 18:09:11.705101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.296 [2024-10-28 18:09:11.706085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.296 [2024-10-28 18:09:11.706156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:18:55.296 [2024-10-28 18:09:11.706391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.871 ms 00:18:55.296 [2024-10-28 18:09:11.706456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.577 [2024-10-28 18:09:11.790114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.577 [2024-10-28 18:09:11.790368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:18:55.577 [2024-10-28 18:09:11.790554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.530 ms 00:18:55.577 [2024-10-28 18:09:11.790692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
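
For reference, the ftl0 instance whose startup is traced here was assembled earlier in this test out of the two QEMU NVMe controllers. Condensed into the bare rpc.py calls seen in the xtrace above (the UUIDs are the values from this particular run, and a leftover lvstore was deleted first with bdev_lvol_delete_lvstore):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0    # base NVMe: 1310720 x 4096-byte blocks
$rpc bdev_lvol_create_lvstore nvme0n1 lvs                            # -> b8826329-ef81-47b5-b6ce-887246647610
$rpc bdev_lvol_create nvme0n1p0 103424 -t -u b8826329-ef81-47b5-b6ce-887246647610
                                                                     # thin lvol -> 78828639-ef3b-4eca-b66b-f15e12c30e72
$rpc bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0     # NV cache NVMe
$rpc bdev_split_create nvc0n1 -s 5171 1                              # one 5171 MiB split -> nvc0n1p0
$rpc -t 240 bdev_ftl_create -b ftl0 -d 78828639-ef3b-4eca-b66b-f15e12c30e72 -c nvc0n1p0 \
    --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10          # produces the startup trace above
$rpc bdev_ftl_unload -b ftl0                                         # later tears it down (trim.sh@61 below)
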
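
Each management step in this sequence is logged as an Action / name / duration / status quadruple, so the log alone is enough to profile startup: the NV cache scrub (1966.010 ms) dominates the 'FTL startup ... 2433.307 ms' total reported a few lines below. A rough extraction sketch over a saved copy of this log; autotest.log is a placeholder name, and it assumes every name: entry is immediately followed by its duration:, as it is in this trace:

grep -E 'trace_step: \*NOTICE\*: \[FTL\]\[ftl0\] (name|duration):' autotest.log \
    | sed -E 's/.* name: //; s/.* duration: ([0-9.]+) ms.*/\1/' \
    | paste - - \
    | sort -t$'\t' -k2,2 -rn \
    | head -5    # slowest first: Scrub NV cache 1966.010, Wipe P2L region 83.530, Clear L2P 63.008, ...
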
00:18:55.577 [2024-10-28 18:09:11.823683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.577 [2024-10-28 18:09:11.823743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:18:55.577 [2024-10-28 18:09:11.823767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.676 ms 00:18:55.577 [2024-10-28 18:09:11.823780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.577 [2024-10-28 18:09:11.856382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.577 [2024-10-28 18:09:11.856636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:18:55.577 [2024-10-28 18:09:11.856674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.452 ms 00:18:55.577 [2024-10-28 18:09:11.856688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.577 [2024-10-28 18:09:11.889426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.577 [2024-10-28 18:09:11.889508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:18:55.577 [2024-10-28 18:09:11.889533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.607 ms 00:18:55.577 [2024-10-28 18:09:11.889565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.577 [2024-10-28 18:09:11.889704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.577 [2024-10-28 18:09:11.889730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:18:55.577 [2024-10-28 18:09:11.889752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:18:55.577 [2024-10-28 18:09:11.889765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.577 [2024-10-28 18:09:11.890071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:55.577 [2024-10-28 18:09:11.890144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:18:55.577 [2024-10-28 18:09:11.890205] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:18:55.577 [2024-10-28 18:09:11.890351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:55.577 [2024-10-28 18:09:11.891511] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:18:55.577 [2024-10-28 18:09:11.895928] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2433.307 ms, result 0 00:18:55.577 [2024-10-28 18:09:11.896867] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:18:55.577 { 00:18:55.577 "name": "ftl0", 00:18:55.577 "uuid": "0cdd57b4-e896-4e97-a9c9-3c575802f024" 00:18:55.577 } 00:18:55.577 18:09:11 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:18:55.577 18:09:11 ftl.ftl_trim -- common/autotest_common.sh@901 -- # local bdev_name=ftl0 00:18:55.577 18:09:11 ftl.ftl_trim -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:18:55.577 18:09:11 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local i 00:18:55.577 18:09:11 ftl.ftl_trim -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:18:55.577 18:09:11 ftl.ftl_trim -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:18:55.577 18:09:11 ftl.ftl_trim -- common/autotest_common.sh@906 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:55.836 18:09:12 ftl.ftl_trim -- 
common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:18:56.403 [ 00:18:56.403 { 00:18:56.403 "name": "ftl0", 00:18:56.403 "aliases": [ 00:18:56.403 "0cdd57b4-e896-4e97-a9c9-3c575802f024" 00:18:56.403 ], 00:18:56.403 "product_name": "FTL disk", 00:18:56.403 "block_size": 4096, 00:18:56.403 "num_blocks": 23592960, 00:18:56.403 "uuid": "0cdd57b4-e896-4e97-a9c9-3c575802f024", 00:18:56.403 "assigned_rate_limits": { 00:18:56.403 "rw_ios_per_sec": 0, 00:18:56.403 "rw_mbytes_per_sec": 0, 00:18:56.403 "r_mbytes_per_sec": 0, 00:18:56.403 "w_mbytes_per_sec": 0 00:18:56.403 }, 00:18:56.403 "claimed": false, 00:18:56.403 "zoned": false, 00:18:56.403 "supported_io_types": { 00:18:56.403 "read": true, 00:18:56.403 "write": true, 00:18:56.403 "unmap": true, 00:18:56.403 "flush": true, 00:18:56.403 "reset": false, 00:18:56.403 "nvme_admin": false, 00:18:56.403 "nvme_io": false, 00:18:56.403 "nvme_io_md": false, 00:18:56.403 "write_zeroes": true, 00:18:56.403 "zcopy": false, 00:18:56.403 "get_zone_info": false, 00:18:56.403 "zone_management": false, 00:18:56.403 "zone_append": false, 00:18:56.403 "compare": false, 00:18:56.403 "compare_and_write": false, 00:18:56.403 "abort": false, 00:18:56.403 "seek_hole": false, 00:18:56.403 "seek_data": false, 00:18:56.403 "copy": false, 00:18:56.403 "nvme_iov_md": false 00:18:56.403 }, 00:18:56.403 "driver_specific": { 00:18:56.403 "ftl": { 00:18:56.403 "base_bdev": "78828639-ef3b-4eca-b66b-f15e12c30e72", 00:18:56.403 "cache": "nvc0n1p0" 00:18:56.403 } 00:18:56.403 } 00:18:56.403 } 00:18:56.403 ] 00:18:56.403 18:09:12 ftl.ftl_trim -- common/autotest_common.sh@909 -- # return 0 00:18:56.403 18:09:12 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:18:56.403 18:09:12 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:18:56.661 18:09:12 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:18:56.661 18:09:12 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:18:56.920 18:09:13 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:18:56.920 { 00:18:56.920 "name": "ftl0", 00:18:56.920 "aliases": [ 00:18:56.920 "0cdd57b4-e896-4e97-a9c9-3c575802f024" 00:18:56.920 ], 00:18:56.920 "product_name": "FTL disk", 00:18:56.920 "block_size": 4096, 00:18:56.920 "num_blocks": 23592960, 00:18:56.920 "uuid": "0cdd57b4-e896-4e97-a9c9-3c575802f024", 00:18:56.920 "assigned_rate_limits": { 00:18:56.920 "rw_ios_per_sec": 0, 00:18:56.920 "rw_mbytes_per_sec": 0, 00:18:56.920 "r_mbytes_per_sec": 0, 00:18:56.920 "w_mbytes_per_sec": 0 00:18:56.920 }, 00:18:56.920 "claimed": false, 00:18:56.920 "zoned": false, 00:18:56.920 "supported_io_types": { 00:18:56.920 "read": true, 00:18:56.920 "write": true, 00:18:56.920 "unmap": true, 00:18:56.920 "flush": true, 00:18:56.920 "reset": false, 00:18:56.920 "nvme_admin": false, 00:18:56.920 "nvme_io": false, 00:18:56.920 "nvme_io_md": false, 00:18:56.920 "write_zeroes": true, 00:18:56.920 "zcopy": false, 00:18:56.920 "get_zone_info": false, 00:18:56.920 "zone_management": false, 00:18:56.920 "zone_append": false, 00:18:56.920 "compare": false, 00:18:56.920 "compare_and_write": false, 00:18:56.920 "abort": false, 00:18:56.920 "seek_hole": false, 00:18:56.920 "seek_data": false, 00:18:56.920 "copy": false, 00:18:56.920 "nvme_iov_md": false 00:18:56.920 }, 00:18:56.920 "driver_specific": { 00:18:56.920 "ftl": { 00:18:56.920 "base_bdev": "78828639-ef3b-4eca-b66b-f15e12c30e72", 
00:18:56.920 "cache": "nvc0n1p0" 00:18:56.920 } 00:18:56.920 } 00:18:56.920 } 00:18:56.920 ]' 00:18:56.920 18:09:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:18:56.920 18:09:13 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:18:56.920 18:09:13 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:18:57.179 [2024-10-28 18:09:13.521993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-10-28 18:09:13.522062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:18:57.179 [2024-10-28 18:09:13.522089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:18:57.179 [2024-10-28 18:09:13.522109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-10-28 18:09:13.522157] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:18:57.179 [2024-10-28 18:09:13.525525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-10-28 18:09:13.525708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:18:57.179 [2024-10-28 18:09:13.525748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.337 ms 00:18:57.179 [2024-10-28 18:09:13.525763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-10-28 18:09:13.526391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-10-28 18:09:13.526419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:18:57.179 [2024-10-28 18:09:13.526438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:18:57.179 [2024-10-28 18:09:13.526450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-10-28 18:09:13.530285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-10-28 18:09:13.530439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:18:57.179 [2024-10-28 18:09:13.530574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.793 ms 00:18:57.179 [2024-10-28 18:09:13.530637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-10-28 18:09:13.538396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-10-28 18:09:13.538606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:18:57.179 [2024-10-28 18:09:13.538744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.569 ms 00:18:57.179 [2024-10-28 18:09:13.538807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-10-28 18:09:13.571418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-10-28 18:09:13.571760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:18:57.179 [2024-10-28 18:09:13.571931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.255 ms 00:18:57.179 [2024-10-28 18:09:13.572075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-10-28 18:09:13.591041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-10-28 18:09:13.591311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:18:57.179 [2024-10-28 18:09:13.591466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 18.763 ms 00:18:57.179 [2024-10-28 18:09:13.591537] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-10-28 18:09:13.592009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-10-28 18:09:13.592162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:18:57.179 [2024-10-28 18:09:13.592306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:18:57.179 [2024-10-28 18:09:13.592463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.179 [2024-10-28 18:09:13.623920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.179 [2024-10-28 18:09:13.624158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:18:57.179 [2024-10-28 18:09:13.624292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.356 ms 00:18:57.179 [2024-10-28 18:09:13.624356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.437 [2024-10-28 18:09:13.656131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.437 [2024-10-28 18:09:13.656366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:18:57.437 [2024-10-28 18:09:13.656508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.547 ms 00:18:57.437 [2024-10-28 18:09:13.656572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.437 [2024-10-28 18:09:13.687759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.438 [2024-10-28 18:09:13.687987] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:18:57.438 [2024-10-28 18:09:13.688171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.031 ms 00:18:57.438 [2024-10-28 18:09:13.688197] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.438 [2024-10-28 18:09:13.719430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.438 [2024-10-28 18:09:13.719486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:18:57.438 [2024-10-28 18:09:13.719524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.036 ms 00:18:57.438 [2024-10-28 18:09:13.719538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.438 [2024-10-28 18:09:13.719658] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:18:57.438 [2024-10-28 18:09:13.719688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719802] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.719999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 
[2024-10-28 18:09:13.720231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:18:57.438 [2024-10-28 18:09:13.720598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:18:57.438 [2024-10-28 18:09:13.720804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.720994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:18:57.439 [2024-10-28 18:09:13.721255] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:18:57.439 [2024-10-28 18:09:13.721272] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cdd57b4-e896-4e97-a9c9-3c575802f024 00:18:57.439 [2024-10-28 18:09:13.721286] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:18:57.439 [2024-10-28 18:09:13.721300] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:18:57.439 [2024-10-28 18:09:13.721312] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:18:57.439 [2024-10-28 18:09:13.721326] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:18:57.439 [2024-10-28 18:09:13.721341] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:18:57.439 [2024-10-28 18:09:13.721356] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:18:57.439 [2024-10-28 18:09:13.721368] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:18:57.439 [2024-10-28 18:09:13.721381] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:18:57.439 [2024-10-28 18:09:13.721392] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:18:57.439 [2024-10-28 18:09:13.721406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.439 [2024-10-28 18:09:13.721419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:18:57.439 [2024-10-28 18:09:13.721434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.754 ms 00:18:57.439 [2024-10-28 18:09:13.721460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.439 [2024-10-28 18:09:13.738323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.439 [2024-10-28 18:09:13.738375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:18:57.439 [2024-10-28 18:09:13.738405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.815 ms 00:18:57.439 [2024-10-28 18:09:13.738419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.439 [2024-10-28 18:09:13.738950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:18:57.439 [2024-10-28 18:09:13.738977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:18:57.439 [2024-10-28 18:09:13.738996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.426 ms 00:18:57.439 [2024-10-28 18:09:13.739009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.439 [2024-10-28 18:09:13.797483] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.439 [2024-10-28 18:09:13.797736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:18:57.439 [2024-10-28 18:09:13.797777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.439 [2024-10-28 18:09:13.797792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.439 [2024-10-28 18:09:13.798001] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.439 [2024-10-28 18:09:13.798024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:18:57.439 [2024-10-28 18:09:13.798041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.439 [2024-10-28 18:09:13.798053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.439 [2024-10-28 18:09:13.798150] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.439 [2024-10-28 18:09:13.798172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:18:57.439 [2024-10-28 18:09:13.798195] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.439 [2024-10-28 18:09:13.798208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.439 [2024-10-28 18:09:13.798254] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.439 [2024-10-28 18:09:13.798271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:18:57.439 [2024-10-28 18:09:13.798286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.439 [2024-10-28 18:09:13.798298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.439 [2024-10-28 18:09:13.909043] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.439 [2024-10-28 18:09:13.909112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:18:57.439 [2024-10-28 18:09:13.909135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.439 [2024-10-28 18:09:13.909149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.751 [2024-10-28 18:09:13.994335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.751 [2024-10-28 18:09:13.994404] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:18:57.751 [2024-10-28 18:09:13.994428] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.751 [2024-10-28 18:09:13.994442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.751 [2024-10-28 18:09:13.994573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.751 [2024-10-28 18:09:13.994594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:18:57.751 [2024-10-28 18:09:13.994637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.751 [2024-10-28 18:09:13.994654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.751 [2024-10-28 18:09:13.994725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.751 [2024-10-28 18:09:13.994742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:18:57.751 [2024-10-28 18:09:13.994757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.751 [2024-10-28 18:09:13.994770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.751 [2024-10-28 18:09:13.994955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.751 [2024-10-28 18:09:13.994978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:18:57.751 [2024-10-28 18:09:13.994995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.751 [2024-10-28 18:09:13.995009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.751 [2024-10-28 18:09:13.995094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.751 [2024-10-28 18:09:13.995115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:18:57.751 [2024-10-28 18:09:13.995131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.751 [2024-10-28 18:09:13.995143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.751 [2024-10-28 18:09:13.995212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.751 [2024-10-28 18:09:13.995230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:18:57.751 [2024-10-28 18:09:13.995248] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.751 [2024-10-28 18:09:13.995261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:18:57.751 [2024-10-28 18:09:13.995340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:18:57.751 [2024-10-28 18:09:13.995366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:18:57.751 [2024-10-28 18:09:13.995383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:18:57.751 [2024-10-28 18:09:13.995396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:18:57.751 [2024-10-28 18:09:13.995633] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 473.627 ms, result 0 00:18:57.751 true 00:18:57.751 18:09:14 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 75282 00:18:57.751 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75282 ']' 00:18:57.751 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75282 00:18:57.751 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:18:57.751 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:18:57.752 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75282 00:18:57.752 killing process with pid 75282 00:18:57.752 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:18:57.752 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:18:57.752 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75282' 00:18:57.752 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75282 00:18:57.752 18:09:14 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75282 00:19:03.055 18:09:18 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:03.313 65536+0 records in 00:19:03.313 65536+0 records out 00:19:03.313 268435456 bytes (268 MB, 256 MiB) copied, 1.19118 s, 225 MB/s 00:19:03.313 18:09:19 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:03.572 [2024-10-28 18:09:19.792519] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
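The dd statistics above are internally consistent: 65536 blocks of 4096 B are 268435456 B (exactly 256 MiB), and dividing by the reported 1.19118 s elapsed yields the quoted decimal-megabyte rate. A quick check, runnable anywhere:

    # 65536 * 4096 = 268435456 bytes, matching "268435456 bytes (268 MB, 256 MiB)".
    echo $(( 65536 * 4096 ))
    # 268435456 B / 1.19118 s ~= 225 MB/s (decimal megabytes), matching dd's summary.
    awk 'BEGIN { printf "%.0f MB/s\n", 268435456 / 1.19118 / 1e6 }'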
00:19:03.572 [2024-10-28 18:09:19.792903] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75481 ] 00:19:03.572 [2024-10-28 18:09:19.969983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.830 [2024-10-28 18:09:20.097386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.088 [2024-10-28 18:09:20.408059] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.088 [2024-10-28 18:09:20.408191] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:04.346 [2024-10-28 18:09:20.569848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.346 [2024-10-28 18:09:20.569957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:04.346 [2024-10-28 18:09:20.569995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:04.346 [2024-10-28 18:09:20.570006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.346 [2024-10-28 18:09:20.573157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.346 [2024-10-28 18:09:20.573201] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:04.346 [2024-10-28 18:09:20.573234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.105 ms 00:19:04.346 [2024-10-28 18:09:20.573260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.346 [2024-10-28 18:09:20.573397] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:04.346 [2024-10-28 18:09:20.574430] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:04.346 [2024-10-28 18:09:20.574474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.346 [2024-10-28 18:09:20.574520] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:04.346 [2024-10-28 18:09:20.574548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.088 ms 00:19:04.346 [2024-10-28 18:09:20.574559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.346 [2024-10-28 18:09:20.575840] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:04.346 [2024-10-28 18:09:20.590871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.346 [2024-10-28 18:09:20.590919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:04.346 [2024-10-28 18:09:20.590968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.033 ms 00:19:04.346 [2024-10-28 18:09:20.590978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.346 [2024-10-28 18:09:20.591096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.346 [2024-10-28 18:09:20.591117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:04.346 [2024-10-28 18:09:20.591129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:19:04.346 [2024-10-28 18:09:20.591140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.346 [2024-10-28 18:09:20.595780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:04.347 [2024-10-28 18:09:20.595855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:04.347 [2024-10-28 18:09:20.595873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.586 ms 00:19:04.347 [2024-10-28 18:09:20.595884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.347 [2024-10-28 18:09:20.596013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.347 [2024-10-28 18:09:20.596035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:04.347 [2024-10-28 18:09:20.596047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:04.347 [2024-10-28 18:09:20.596057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.347 [2024-10-28 18:09:20.596096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.347 [2024-10-28 18:09:20.596117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:04.347 [2024-10-28 18:09:20.596129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:04.347 [2024-10-28 18:09:20.596139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.347 [2024-10-28 18:09:20.596170] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:04.347 [2024-10-28 18:09:20.600333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.347 [2024-10-28 18:09:20.600369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:04.347 [2024-10-28 18:09:20.600401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.172 ms 00:19:04.347 [2024-10-28 18:09:20.600411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.347 [2024-10-28 18:09:20.600476] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.347 [2024-10-28 18:09:20.600494] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:04.347 [2024-10-28 18:09:20.600506] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:04.347 [2024-10-28 18:09:20.600516] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.347 [2024-10-28 18:09:20.600547] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:04.347 [2024-10-28 18:09:20.600578] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:04.347 [2024-10-28 18:09:20.600618] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:04.347 [2024-10-28 18:09:20.600637] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:04.347 [2024-10-28 18:09:20.600735] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:04.347 [2024-10-28 18:09:20.600749] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:04.347 [2024-10-28 18:09:20.600763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:04.347 [2024-10-28 18:09:20.600777] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:04.347 [2024-10-28 18:09:20.600792] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:04.347 [2024-10-28 18:09:20.600803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:04.347 [2024-10-28 18:09:20.600813] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:04.347 [2024-10-28 18:09:20.600822] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:04.347 [2024-10-28 18:09:20.600832] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:04.347 [2024-10-28 18:09:20.600843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.347 [2024-10-28 18:09:20.600907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:04.347 [2024-10-28 18:09:20.600921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.299 ms 00:19:04.347 [2024-10-28 18:09:20.600932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.347 [2024-10-28 18:09:20.601030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.347 [2024-10-28 18:09:20.601047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:04.347 [2024-10-28 18:09:20.601065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:04.347 [2024-10-28 18:09:20.601075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.347 [2024-10-28 18:09:20.601241] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:04.347 [2024-10-28 18:09:20.601261] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:04.347 [2024-10-28 18:09:20.601275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.347 [2024-10-28 18:09:20.601287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601299] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:04.347 [2024-10-28 18:09:20.601309] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601320] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:04.347 [2024-10-28 18:09:20.601331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:04.347 [2024-10-28 18:09:20.601342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.347 [2024-10-28 18:09:20.601368] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:04.347 [2024-10-28 18:09:20.601379] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:04.347 [2024-10-28 18:09:20.601390] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:04.347 [2024-10-28 18:09:20.601415] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:04.347 [2024-10-28 18:09:20.601427] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:04.347 [2024-10-28 18:09:20.601438] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601449] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:04.347 [2024-10-28 18:09:20.601460] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:04.347 [2024-10-28 18:09:20.601483] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601495] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:04.347 [2024-10-28 18:09:20.601506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601516] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.347 [2024-10-28 18:09:20.601526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:04.347 [2024-10-28 18:09:20.601537] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.347 [2024-10-28 18:09:20.601558] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:04.347 [2024-10-28 18:09:20.601568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.347 [2024-10-28 18:09:20.601589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:04.347 [2024-10-28 18:09:20.601599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:04.347 [2024-10-28 18:09:20.601620] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:04.347 [2024-10-28 18:09:20.601630] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601640] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.347 [2024-10-28 18:09:20.601651] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:04.347 [2024-10-28 18:09:20.601662] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:04.347 [2024-10-28 18:09:20.601672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:04.347 [2024-10-28 18:09:20.601682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:04.347 [2024-10-28 18:09:20.601693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:04.347 [2024-10-28 18:09:20.601703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:04.347 [2024-10-28 18:09:20.601727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:04.347 [2024-10-28 18:09:20.601738] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601747] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:04.347 [2024-10-28 18:09:20.601759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:04.347 [2024-10-28 18:09:20.601771] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:04.347 [2024-10-28 18:09:20.601787] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:04.347 [2024-10-28 18:09:20.601799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:04.347 [2024-10-28 18:09:20.601811] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:04.347 [2024-10-28 18:09:20.601821] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:04.347 
[2024-10-28 18:09:20.601832] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:04.347 [2024-10-28 18:09:20.601869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:04.347 [2024-10-28 18:09:20.601881] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:04.347 [2024-10-28 18:09:20.601893] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:04.347 [2024-10-28 18:09:20.601907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.347 [2024-10-28 18:09:20.601919] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:04.347 [2024-10-28 18:09:20.601931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:04.347 [2024-10-28 18:09:20.601942] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:04.347 [2024-10-28 18:09:20.601953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:04.347 [2024-10-28 18:09:20.601964] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:04.347 [2024-10-28 18:09:20.601975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:04.347 [2024-10-28 18:09:20.601986] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:04.347 [2024-10-28 18:09:20.601998] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:04.347 [2024-10-28 18:09:20.602009] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:04.348 [2024-10-28 18:09:20.602020] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:04.348 [2024-10-28 18:09:20.602031] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:04.348 [2024-10-28 18:09:20.602042] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:04.348 [2024-10-28 18:09:20.602054] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:04.348 [2024-10-28 18:09:20.602066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:04.348 [2024-10-28 18:09:20.602077] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:04.348 [2024-10-28 18:09:20.602089] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:04.348 [2024-10-28 18:09:20.602101] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:04.348 [2024-10-28 18:09:20.602113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:04.348 [2024-10-28 18:09:20.602127] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:04.348 [2024-10-28 18:09:20.602138] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:04.348 [2024-10-28 18:09:20.602151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.602162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:04.348 [2024-10-28 18:09:20.602179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.011 ms 00:19:04.348 [2024-10-28 18:09:20.602191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.633179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.633547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:04.348 [2024-10-28 18:09:20.633580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.915 ms 00:19:04.348 [2024-10-28 18:09:20.633593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.633793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.633822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:04.348 [2024-10-28 18:09:20.633860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:04.348 [2024-10-28 18:09:20.633876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.681879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.681950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:04.348 [2024-10-28 18:09:20.681987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.937 ms 00:19:04.348 [2024-10-28 18:09:20.682003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.682166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.682186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:04.348 [2024-10-28 18:09:20.682198] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:04.348 [2024-10-28 18:09:20.682209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.682525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.682542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:04.348 [2024-10-28 18:09:20.682554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.287 ms 00:19:04.348 [2024-10-28 18:09:20.682569] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.682712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.682731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:04.348 [2024-10-28 18:09:20.682742] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:19:04.348 [2024-10-28 18:09:20.682752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.698616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.698668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:04.348 [2024-10-28 18:09:20.698702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.835 ms 00:19:04.348 [2024-10-28 18:09:20.698713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.714006] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:04.348 [2024-10-28 18:09:20.714068] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:04.348 [2024-10-28 18:09:20.714103] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.714113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:04.348 [2024-10-28 18:09:20.714125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.162 ms 00:19:04.348 [2024-10-28 18:09:20.714136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.740938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.741151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:04.348 [2024-10-28 18:09:20.741211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.707 ms 00:19:04.348 [2024-10-28 18:09:20.741224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.755758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.756003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:04.348 [2024-10-28 18:09:20.756033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.427 ms 00:19:04.348 [2024-10-28 18:09:20.756047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.771354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.771550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:04.348 [2024-10-28 18:09:20.771580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.202 ms 00:19:04.348 [2024-10-28 18:09:20.771591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.348 [2024-10-28 18:09:20.772500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.348 [2024-10-28 18:09:20.772538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:04.348 [2024-10-28 18:09:20.772554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:19:04.348 [2024-10-28 18:09:20.772565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.606 [2024-10-28 18:09:20.840461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.606 [2024-10-28 18:09:20.840534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:04.606 [2024-10-28 18:09:20.840570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 67.860 ms 00:19:04.606 [2024-10-28 18:09:20.840581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.606 [2024-10-28 18:09:20.852448] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:04.606 [2024-10-28 18:09:20.865851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.606 [2024-10-28 18:09:20.865960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:04.606 [2024-10-28 18:09:20.865998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.110 ms 00:19:04.606 [2024-10-28 18:09:20.866009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.606 [2024-10-28 18:09:20.866157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.606 [2024-10-28 18:09:20.866181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:04.606 [2024-10-28 18:09:20.866194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:04.606 [2024-10-28 18:09:20.866205] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.606 [2024-10-28 18:09:20.866271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.606 [2024-10-28 18:09:20.866303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:04.606 [2024-10-28 18:09:20.866316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:04.606 [2024-10-28 18:09:20.866326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.606 [2024-10-28 18:09:20.866364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.606 [2024-10-28 18:09:20.866380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:04.606 [2024-10-28 18:09:20.866395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:04.606 [2024-10-28 18:09:20.866405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.606 [2024-10-28 18:09:20.866554] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:04.606 [2024-10-28 18:09:20.866573] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.606 [2024-10-28 18:09:20.866583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:04.606 [2024-10-28 18:09:20.866594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:04.606 [2024-10-28 18:09:20.866605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.606 [2024-10-28 18:09:20.898259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.606 [2024-10-28 18:09:20.898346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:04.606 [2024-10-28 18:09:20.898368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.625 ms 00:19:04.606 [2024-10-28 18:09:20.898381] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:04.606 [2024-10-28 18:09:20.898609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:04.606 [2024-10-28 18:09:20.898631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:04.606 [2024-10-28 18:09:20.898645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:19:04.606 [2024-10-28 18:09:20.898657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
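The mngt/ftl_mngt.c trace_step records above follow a fixed four-entry pattern per management step (427: Action/Rollback, 428: name, 430: duration, 431: status), which makes the per-step cost of an FTL startup or shutdown easy to mine from a captured console log. A minimal triage sketch is below; it assumes one log entry per line and a capture file named console.log, both illustrative rather than part of the test suite:

    #!/usr/bin/env bash
    # Hypothetical helper: rank FTL management steps by duration from a log
    # capture. Assumes the 428 (name) / 430 (duration) entry layout seen
    # above, one entry per line; 'console.log' stands in for your capture.
    awk -F'name: |duration: | ms' '
        /428:trace_step/ { step = $2 }                           # remember step name
        /430:trace_step/ { printf "%10.3f ms  %s\n", $2, step }  # pair with duration
    ' console.log | sort -rn | head -10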
00:19:04.606 [2024-10-28 18:09:20.899923] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:04.606 [2024-10-28 18:09:20.904327] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 329.721 ms, result 0
00:19:04.606 [2024-10-28 18:09:20.905167] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:04.606 [2024-10-28 18:09:20.922217] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:05.540 [2024-10-28T18:09:22.952Z] Copying: 23/256 [MB] (23 MBps)
[... intermediate copy-progress frames elided ...]
[2024-10-28T18:09:32.294Z] Copying: 256/256 [MB] (average 23 MBps)
[2024-10-28 18:09:32.036694] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:15.816 [2024-10-28 18:09:32.049801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.050032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:15.816 [2024-10-28 18:09:32.050164] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:15.816 [2024-10-28 18:09:32.050322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.050410] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:15.816 [2024-10-28 18:09:32.053888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.054079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:15.816 [2024-10-28 18:09:32.054107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.282 ms
00:19:15.816 [2024-10-28 18:09:32.054119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.055826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.056015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:15.816 [2024-10-28 18:09:32.056101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.662 ms
00:19:15.816 [2024-10-28 18:09:32.056194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.063535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.063734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:15.816 [2024-10-28 18:09:32.063918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.202 ms
00:19:15.816 [2024-10-28 18:09:32.063972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.071688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.071866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:19:15.816 [2024-10-28 18:09:32.072022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.499 ms
00:19:15.816 [2024-10-28 18:09:32.072095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.105104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.105288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:15.816 [2024-10-28 18:09:32.105411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.861 ms
00:19:15.816 [2024-10-28 18:09:32.105544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.124481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.124670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:19:15.816 [2024-10-28 18:09:32.124832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.771 ms
00:19:15.816 [2024-10-28 18:09:32.124912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.125142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.125216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:19:15.816 [2024-10-28 18:09:32.125356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms
00:19:15.816 [2024-10-28 18:09:32.125380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.158058] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.158246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:19:15.816 [2024-10-28 18:09:32.158291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.646 ms
00:19:15.816 [2024-10-28 18:09:32.158304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.188765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.188811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:19:15.816 [2024-10-28 18:09:32.188844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.389 ms
00:19:15.816 [2024-10-28 18:09:32.188900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.219178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.219243] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:15.816 [2024-10-28 18:09:32.219276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.194 ms
00:19:15.816 [2024-10-28 18:09:32.219287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.816 [2024-10-28 18:09:32.251021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.816 [2024-10-28 18:09:32.251073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:15.816 [2024-10-28 18:09:32.251123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.627 ms
00:19:15.816 [2024-10-28 18:09:32.251134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
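The shutdown path next dumps band validity and device counters (ftl_debug.c, below). The WAF line there is total writes divided by user writes, so a run like this one, with 960 internal writes and 0 user writes, correctly reports inf. The same counters can also be pulled from a live target instead of waiting for the shutdown dump; a hedged sketch follows (bdev_ftl_get_stats is the RPC name in recent SPDK trees, so confirm it against your version with rpc_get_methods before relying on it):

    # Hedged sketch: query FTL counters from a running target over its RPC
    # socket. Confirm the RPC name first: scripts/rpc.py rpc_get_methods
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        bdev_ftl_get_stats -b ftl0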
00:19:15.816 [2024-10-28 18:09:32.251207] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:15.816 [2024-10-28 18:09:32.251241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
[... Bands 2-99 elided: all identical, 0 / 261120 wr_cnt: 0 state: free ...]
00:19:15.817 [2024-10-28 18:09:32.252457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:15.817 [2024-10-28 18:09:32.252477] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:15.817 [2024-10-28 18:09:32.252502] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cdd57b4-e896-4e97-a9c9-3c575802f024
00:19:15.817 [2024-10-28 18:09:32.252514] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:15.817 [2024-10-28 18:09:32.252525] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:15.817 [2024-10-28 18:09:32.252535] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:15.817 [2024-10-28 18:09:32.252546] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:15.817 [2024-10-28 18:09:32.252556] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:15.817 [2024-10-28 18:09:32.252566] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:15.817 [2024-10-28 18:09:32.252577] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:15.817 [2024-10-28 18:09:32.252597] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:15.818 [2024-10-28 18:09:32.252607] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:15.818 [2024-10-28 18:09:32.252618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.818 [2024-10-28 18:09:32.252629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:15.818 [2024-10-28 18:09:32.252646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.414 ms
00:19:15.818 [2024-10-28 18:09:32.252657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.818 [2024-10-28 18:09:32.268858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.818 [2024-10-28 18:09:32.269080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:15.818 [2024-10-28 18:09:32.269111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.173 ms
00:19:15.818 [2024-10-28 18:09:32.269123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:15.818 [2024-10-28 18:09:32.269654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:15.818 [2024-10-28 18:09:32.269689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:15.818 [2024-10-28 18:09:32.269703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.475 ms
00:19:15.818 [2024-10-28 18:09:32.269714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:16.077 [2024-10-28 18:09:32.317055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:16.077 [2024-10-28 18:09:32.317146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:16.077 [2024-10-28 18:09:32.317165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:16.077 [2024-10-28 18:09:32.317177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
[... 10 further Rollback steps elided (Initialize bands metadata, Initialize trim map, Initialize valid map, Initialize NV cache, Initialize metadata, Initialize core IO channel, Initialize bands, Initialize memory pools, Initialize superblock, Open cache bdev), each duration: 0.000 ms, status: 0 ...]
00:19:16.077 [2024-10-28 18:09:32.511676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:16.077 [2024-10-28 18:09:32.511693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:16.077 [2024-10-28 18:09:32.511706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:16.077 [2024-10-28 18:09:32.511730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:16.077 [2024-10-28 18:09:32.511940] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 462.129 ms, result 0
00:19:17.451
00:19:17.451
00:19:17.451 18:09:33 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=75623
00:19:17.451 18:09:33 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init
00:19:17.451 18:09:33 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 75623
00:19:17.451 18:09:33 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75623 ']'
00:19:17.451 18:09:33 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:17.451 18:09:33 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100
00:19:17.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 18:09:33 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:17.451 18:09:33 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable
00:19:17.451 18:09:33 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x
00:19:17.451 [2024-10-28 18:09:33.707414] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization...
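Above, trim.sh backgrounds a fresh spdk_tgt (the -L ftl_init flag enables the ftl_init log component) and waitforlisten from common/autotest_common.sh blocks until the target answers on /var/tmp/spdk.sock, giving up after max_retries attempts. A stripped-down sketch of that polling pattern, with illustrative names and retry cadence rather than the suite's actual implementation:

    # Illustrative stand-in for the waitforlisten pattern seen above: poll the
    # target's RPC socket until the app responds, bailing out if the process
    # dies first. rpc_get_methods succeeds only once the RPC server listens.
    wait_for_rpc() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1    # target exited early
            scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            sleep 0.5
        done
        return 1                                      # gave up waiting
    }

    build/bin/spdk_tgt -L ftl_init &
    wait_for_rpc $! || { echo 'spdk_tgt failed to start' >&2; exit 1; }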
00:19:17.451 [2024-10-28 18:09:33.707808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75623 ] 00:19:17.451 [2024-10-28 18:09:33.884581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.710 [2024-10-28 18:09:33.995496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.645 18:09:34 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:18.645 18:09:34 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:19:18.645 18:09:34 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:18.645 [2024-10-28 18:09:35.094626] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.645 [2024-10-28 18:09:35.094952] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:18.920 [2024-10-28 18:09:35.276324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.276390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:18.920 [2024-10-28 18:09:35.276430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:18.920 [2024-10-28 18:09:35.276443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.279897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.279941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:18.920 [2024-10-28 18:09:35.279978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.426 ms 00:19:18.920 [2024-10-28 18:09:35.279990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.280169] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:18.920 [2024-10-28 18:09:35.281135] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:18.920 [2024-10-28 18:09:35.281197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.281212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:18.920 [2024-10-28 18:09:35.281227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.026 ms 00:19:18.920 [2024-10-28 18:09:35.281239] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.282630] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:18.920 [2024-10-28 18:09:35.299602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.299660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:18.920 [2024-10-28 18:09:35.299682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.999 ms 00:19:18.920 [2024-10-28 18:09:35.299701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.299826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.299881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:18.920 [2024-10-28 18:09:35.299897] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:19:18.920 [2024-10-28 18:09:35.299912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.304474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.304755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:18.920 [2024-10-28 18:09:35.304786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.493 ms 00:19:18.920 [2024-10-28 18:09:35.304801] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.305017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.305050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:18.920 [2024-10-28 18:09:35.305066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:19:18.920 [2024-10-28 18:09:35.305083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.305131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.305155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:18.920 [2024-10-28 18:09:35.305169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:18.920 [2024-10-28 18:09:35.305189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.305270] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:18.920 [2024-10-28 18:09:35.309436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.309474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:18.920 [2024-10-28 18:09:35.309512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.186 ms 00:19:18.920 [2024-10-28 18:09:35.309524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.309629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.309650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:18.920 [2024-10-28 18:09:35.309669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:18.920 [2024-10-28 18:09:35.309688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.309726] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:18.920 [2024-10-28 18:09:35.309758] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:18.920 [2024-10-28 18:09:35.309821] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:18.920 [2024-10-28 18:09:35.309902] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:18.920 [2024-10-28 18:09:35.310038] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:18.920 [2024-10-28 18:09:35.310057] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:18.920 [2024-10-28 18:09:35.310081] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:18.920 [2024-10-28 18:09:35.310103] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:18.920 [2024-10-28 18:09:35.310122] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:18.920 [2024-10-28 18:09:35.310136] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:18.920 [2024-10-28 18:09:35.310152] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:18.920 [2024-10-28 18:09:35.310165] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:18.920 [2024-10-28 18:09:35.310185] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:18.920 [2024-10-28 18:09:35.310199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.310216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:18.920 [2024-10-28 18:09:35.310230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.485 ms 00:19:18.920 [2024-10-28 18:09:35.310246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.310399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.920 [2024-10-28 18:09:35.310424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:18.920 [2024-10-28 18:09:35.310437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:19:18.920 [2024-10-28 18:09:35.310453] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.920 [2024-10-28 18:09:35.310577] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:18.920 [2024-10-28 18:09:35.310601] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:18.920 [2024-10-28 18:09:35.310615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.920 [2024-10-28 18:09:35.310632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.920 [2024-10-28 18:09:35.310645] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:18.920 [2024-10-28 18:09:35.310661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:18.920 [2024-10-28 18:09:35.310673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:18.920 [2024-10-28 18:09:35.310696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:18.920 [2024-10-28 18:09:35.310709] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:18.920 [2024-10-28 18:09:35.310725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.920 [2024-10-28 18:09:35.310737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:18.920 [2024-10-28 18:09:35.310753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:18.920 [2024-10-28 18:09:35.310764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:18.920 [2024-10-28 18:09:35.310780] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:18.920 [2024-10-28 18:09:35.310792] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:18.920 [2024-10-28 18:09:35.310807] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.920 
[2024-10-28 18:09:35.310819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:18.920 [2024-10-28 18:09:35.310849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:18.920 [2024-10-28 18:09:35.310864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.920 [2024-10-28 18:09:35.310882] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:18.920 [2024-10-28 18:09:35.310908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:18.920 [2024-10-28 18:09:35.310926] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.920 [2024-10-28 18:09:35.310937] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:18.920 [2024-10-28 18:09:35.310957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:18.920 [2024-10-28 18:09:35.310969] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.920 [2024-10-28 18:09:35.310984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:18.920 [2024-10-28 18:09:35.310995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:18.920 [2024-10-28 18:09:35.311019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.920 [2024-10-28 18:09:35.311031] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:18.920 [2024-10-28 18:09:35.311046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:18.920 [2024-10-28 18:09:35.311057] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:18.920 [2024-10-28 18:09:35.311073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:18.920 [2024-10-28 18:09:35.311084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:18.920 [2024-10-28 18:09:35.311101] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.920 [2024-10-28 18:09:35.311113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:18.920 [2024-10-28 18:09:35.311129] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:18.920 [2024-10-28 18:09:35.311140] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:18.920 [2024-10-28 18:09:35.311156] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:18.920 [2024-10-28 18:09:35.311168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:18.920 [2024-10-28 18:09:35.311187] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.920 [2024-10-28 18:09:35.311199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:18.920 [2024-10-28 18:09:35.311214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:18.920 [2024-10-28 18:09:35.311226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.921 [2024-10-28 18:09:35.311241] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:18.921 [2024-10-28 18:09:35.311254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:18.921 [2024-10-28 18:09:35.311278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:18.921 [2024-10-28 18:09:35.311290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:18.921 [2024-10-28 18:09:35.311307] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:18.921 [2024-10-28 18:09:35.311319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:18.921 [2024-10-28 18:09:35.311335] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:18.921 [2024-10-28 18:09:35.311347] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:18.921 [2024-10-28 18:09:35.311362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:18.921 [2024-10-28 18:09:35.311374] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:18.921 [2024-10-28 18:09:35.311391] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:18.921 [2024-10-28 18:09:35.311406] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.921 [2024-10-28 18:09:35.311429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:18.921 [2024-10-28 18:09:35.311441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:18.921 [2024-10-28 18:09:35.311459] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:18.921 [2024-10-28 18:09:35.311472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:18.921 [2024-10-28 18:09:35.311488] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:18.921 [2024-10-28 18:09:35.311501] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:18.921 [2024-10-28 18:09:35.311517] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:18.921 [2024-10-28 18:09:35.311529] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:18.921 [2024-10-28 18:09:35.311542] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:18.921 [2024-10-28 18:09:35.311554] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:18.921 [2024-10-28 18:09:35.311567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:18.921 [2024-10-28 18:09:35.311578] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:18.921 [2024-10-28 18:09:35.311591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:18.921 [2024-10-28 18:09:35.311603] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:18.921 [2024-10-28 18:09:35.311615] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:18.921 [2024-10-28 
18:09:35.311628] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:18.921 [2024-10-28 18:09:35.311644] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:18.921 [2024-10-28 18:09:35.311656] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:18.921 [2024-10-28 18:09:35.311669] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:18.921 [2024-10-28 18:09:35.311681] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:18.921 [2024-10-28 18:09:35.311696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.921 [2024-10-28 18:09:35.311707] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:18.921 [2024-10-28 18:09:35.311721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.197 ms 00:19:18.921 [2024-10-28 18:09:35.311732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.921 [2024-10-28 18:09:35.345612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.921 [2024-10-28 18:09:35.345922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:18.921 [2024-10-28 18:09:35.346078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.796 ms 00:19:18.921 [2024-10-28 18:09:35.346229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:18.921 [2024-10-28 18:09:35.346494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:18.921 [2024-10-28 18:09:35.346568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:18.921 [2024-10-28 18:09:35.346714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:18.921 [2024-10-28 18:09:35.346772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.389703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.389969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:19.199 [2024-10-28 18:09:35.390140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.840 ms 00:19:19.199 [2024-10-28 18:09:35.390274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.390487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.390559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:19.199 [2024-10-28 18:09:35.390690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:19.199 [2024-10-28 18:09:35.390748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.391223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.391369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:19.199 [2024-10-28 18:09:35.391526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.317 ms 00:19:19.199 [2024-10-28 18:09:35.391650] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.391902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.391972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:19.199 [2024-10-28 18:09:35.392152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.160 ms 00:19:19.199 [2024-10-28 18:09:35.392210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.412073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.412286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:19.199 [2024-10-28 18:09:35.412415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.794 ms 00:19:19.199 [2024-10-28 18:09:35.412540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.428324] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:19.199 [2024-10-28 18:09:35.428532] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:19.199 [2024-10-28 18:09:35.428708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.428955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:19.199 [2024-10-28 18:09:35.429030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.975 ms 00:19:19.199 [2024-10-28 18:09:35.429191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.457065] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.457278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:19.199 [2024-10-28 18:09:35.457410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.705 ms 00:19:19.199 [2024-10-28 18:09:35.457467] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.473264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.473426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:19.199 [2024-10-28 18:09:35.473574] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.470 ms 00:19:19.199 [2024-10-28 18:09:35.473600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.487828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.487881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:19.199 [2024-10-28 18:09:35.487922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.119 ms 00:19:19.199 [2024-10-28 18:09:35.487934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.488775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.488839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:19.199 [2024-10-28 18:09:35.488920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.713 ms 00:19:19.199 [2024-10-28 18:09:35.488935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 
18:09:35.579252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.579328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:19.199 [2024-10-28 18:09:35.579376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 90.277 ms 00:19:19.199 [2024-10-28 18:09:35.579390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.591879] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:19.199 [2024-10-28 18:09:35.606468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.606818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:19.199 [2024-10-28 18:09:35.606874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.888 ms 00:19:19.199 [2024-10-28 18:09:35.606896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.607049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.607079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:19.199 [2024-10-28 18:09:35.607095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:19.199 [2024-10-28 18:09:35.607113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.607181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.607206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:19.199 [2024-10-28 18:09:35.607222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:19.199 [2024-10-28 18:09:35.607240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.199 [2024-10-28 18:09:35.607280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.199 [2024-10-28 18:09:35.607302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:19.199 [2024-10-28 18:09:35.607317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:19.199 [2024-10-28 18:09:35.607338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.200 [2024-10-28 18:09:35.607390] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:19.200 [2024-10-28 18:09:35.607421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.200 [2024-10-28 18:09:35.607434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:19.200 [2024-10-28 18:09:35.607460] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:19:19.200 [2024-10-28 18:09:35.607473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.200 [2024-10-28 18:09:35.640250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.200 [2024-10-28 18:09:35.640474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:19.200 [2024-10-28 18:09:35.640529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.726 ms 00:19:19.200 [2024-10-28 18:09:35.640544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.200 [2024-10-28 18:09:35.640691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.200 [2024-10-28 18:09:35.640713] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:19.200 [2024-10-28 18:09:35.640730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.045 ms 00:19:19.200 [2024-10-28 18:09:35.640745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.200 [2024-10-28 18:09:35.641864] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:19.200 [2024-10-28 18:09:35.646157] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 365.106 ms, result 0 00:19:19.200 [2024-10-28 18:09:35.647272] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:19.200 Some configs were skipped because the RPC state that can call them passed over. 00:19:19.458 18:09:35 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:19.716 [2024-10-28 18:09:35.993914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.716 [2024-10-28 18:09:35.994220] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:19.716 [2024-10-28 18:09:35.994385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.631 ms 00:19:19.716 [2024-10-28 18:09:35.994425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.716 [2024-10-28 18:09:35.994509] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 2.205 ms, result 0 00:19:19.716 true 00:19:19.716 18:09:36 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:19.974 [2024-10-28 18:09:36.345891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:19.974 [2024-10-28 18:09:36.346131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:19.974 [2024-10-28 18:09:36.346178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.099 ms 00:19:19.974 [2024-10-28 18:09:36.346194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:19.974 [2024-10-28 18:09:36.346270] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.477 ms, result 0 00:19:19.974 true 00:19:19.974 18:09:36 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 75623 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75623 ']' 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75623 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75623 00:19:19.974 killing process with pid 75623 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75623' 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75623 00:19:19.974 18:09:36 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75623 00:19:20.909 [2024-10-28 18:09:37.366420] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-10-28 18:09:37.366502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:20.909 [2024-10-28 18:09:37.366525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:20.909 [2024-10-28 18:09:37.366540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-10-28 18:09:37.366574] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:20.909 [2024-10-28 18:09:37.369932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-10-28 18:09:37.369974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:20.909 [2024-10-28 18:09:37.369996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.331 ms 00:19:20.909 [2024-10-28 18:09:37.370009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-10-28 18:09:37.370316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-10-28 18:09:37.370336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:20.909 [2024-10-28 18:09:37.370351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:19:20.909 [2024-10-28 18:09:37.370364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-10-28 18:09:37.374498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-10-28 18:09:37.374543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:20.909 [2024-10-28 18:09:37.374568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.104 ms 00:19:20.909 [2024-10-28 18:09:37.374581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:20.909 [2024-10-28 18:09:37.382167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:20.909 [2024-10-28 18:09:37.382211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:20.909 [2024-10-28 18:09:37.382230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.534 ms 00:19:20.909 [2024-10-28 18:09:37.382243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.169 [2024-10-28 18:09:37.395004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.169 [2024-10-28 18:09:37.395057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:21.169 [2024-10-28 18:09:37.395082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.685 ms 00:19:21.169 [2024-10-28 18:09:37.395106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.169 [2024-10-28 18:09:37.403653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.169 [2024-10-28 18:09:37.403711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:21.169 [2024-10-28 18:09:37.403737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.484 ms 00:19:21.169 [2024-10-28 18:09:37.403750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.169 [2024-10-28 18:09:37.403942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.169 [2024-10-28 18:09:37.403966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:21.169 [2024-10-28 18:09:37.403982] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.120 ms 00:19:21.169 [2024-10-28 18:09:37.403994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.169 [2024-10-28 18:09:37.417217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.169 [2024-10-28 18:09:37.417477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:21.169 [2024-10-28 18:09:37.417518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.181 ms 00:19:21.169 [2024-10-28 18:09:37.417532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.169 [2024-10-28 18:09:37.430472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.169 [2024-10-28 18:09:37.430540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:21.169 [2024-10-28 18:09:37.430573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.811 ms 00:19:21.170 [2024-10-28 18:09:37.430587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.170 [2024-10-28 18:09:37.442986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.170 [2024-10-28 18:09:37.443048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:21.170 [2024-10-28 18:09:37.443079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.326 ms 00:19:21.170 [2024-10-28 18:09:37.443093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.170 [2024-10-28 18:09:37.455582] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.170 [2024-10-28 18:09:37.455668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:21.170 [2024-10-28 18:09:37.455699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.378 ms 00:19:21.170 [2024-10-28 18:09:37.455713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.170 [2024-10-28 18:09:37.455777] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:21.170 [2024-10-28 18:09:37.455805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.455828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.455867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.455891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.455906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.455942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.455958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.455978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.455993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 
18:09:37.456027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456197] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 
00:19:21.170 [2024-10-28 18:09:37.456458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456707] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 
wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.456989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:21.170 [2024-10-28 18:09:37.457251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:21.171 [2024-10-28 18:09:37.457537] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:21.171 [2024-10-28 18:09:37.457580] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cdd57b4-e896-4e97-a9c9-3c575802f024 00:19:21.171 [2024-10-28 18:09:37.457611] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:21.171 [2024-10-28 18:09:37.457637] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:21.171 [2024-10-28 18:09:37.457650] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:21.171 [2024-10-28 18:09:37.457668] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:21.171 [2024-10-28 18:09:37.457687] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:21.171 [2024-10-28 18:09:37.457704] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:21.171 [2024-10-28 18:09:37.457717] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:21.171 [2024-10-28 18:09:37.457733] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:21.171 [2024-10-28 18:09:37.457745] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:21.171 [2024-10-28 18:09:37.457759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:21.171 [2024-10-28 18:09:37.457772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:21.171 [2024-10-28 18:09:37.457787] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.992 ms 00:19:21.171 [2024-10-28 18:09:37.457799] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.171 [2024-10-28 18:09:37.474632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.171 [2024-10-28 18:09:37.474710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:21.171 [2024-10-28 18:09:37.474737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.742 ms 00:19:21.171 [2024-10-28 18:09:37.474750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.171 [2024-10-28 18:09:37.475359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:21.171 [2024-10-28 18:09:37.475397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:21.171 [2024-10-28 18:09:37.475417] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.417 ms 00:19:21.171 [2024-10-28 18:09:37.475433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.171 [2024-10-28 18:09:37.533788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.171 [2024-10-28 18:09:37.533885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:21.171 [2024-10-28 18:09:37.533924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.171 [2024-10-28 18:09:37.533937] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.171 [2024-10-28 18:09:37.534091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.171 [2024-10-28 18:09:37.534110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:21.171 [2024-10-28 18:09:37.534125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.171 [2024-10-28 18:09:37.534141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.171 [2024-10-28 18:09:37.534229] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.171 [2024-10-28 18:09:37.534249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:21.171 [2024-10-28 18:09:37.534268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.171 [2024-10-28 18:09:37.534280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.171 [2024-10-28 18:09:37.534309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.171 [2024-10-28 18:09:37.534324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:21.171 [2024-10-28 18:09:37.534338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.171 [2024-10-28 18:09:37.534350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.171 [2024-10-28 18:09:37.638462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.171 [2024-10-28 18:09:37.638539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:21.171 [2024-10-28 18:09:37.638563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.171 [2024-10-28 18:09:37.638576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.429 [2024-10-28 
18:09:37.724532] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.429 [2024-10-28 18:09:37.724607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:21.429 [2024-10-28 18:09:37.724631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.429 [2024-10-28 18:09:37.724647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.429 [2024-10-28 18:09:37.724763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.429 [2024-10-28 18:09:37.724784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:21.429 [2024-10-28 18:09:37.724803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.429 [2024-10-28 18:09:37.724815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.429 [2024-10-28 18:09:37.724886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.429 [2024-10-28 18:09:37.724914] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:21.429 [2024-10-28 18:09:37.724930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.429 [2024-10-28 18:09:37.724944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.429 [2024-10-28 18:09:37.725081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.430 [2024-10-28 18:09:37.725101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:21.430 [2024-10-28 18:09:37.725116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.430 [2024-10-28 18:09:37.725128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.430 [2024-10-28 18:09:37.725186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.430 [2024-10-28 18:09:37.725206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:21.430 [2024-10-28 18:09:37.725221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.430 [2024-10-28 18:09:37.725233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.430 [2024-10-28 18:09:37.725284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.430 [2024-10-28 18:09:37.725303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:21.430 [2024-10-28 18:09:37.725320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.430 [2024-10-28 18:09:37.725332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.430 [2024-10-28 18:09:37.725390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:21.430 [2024-10-28 18:09:37.725408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:21.430 [2024-10-28 18:09:37.725423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:21.430 [2024-10-28 18:09:37.725434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:21.430 [2024-10-28 18:09:37.725613] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 359.157 ms, result 0 00:19:22.364 18:09:38 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:22.364 18:09:38 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:22.364 [2024-10-28 18:09:38.728698] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:19:22.364 [2024-10-28 18:09:38.728908] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75687 ] 00:19:22.622 [2024-10-28 18:09:38.909748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.622 [2024-10-28 18:09:39.012256] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.880 [2024-10-28 18:09:39.329681] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:22.880 [2024-10-28 18:09:39.329769] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:23.140 [2024-10-28 18:09:39.492223] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.492298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:23.140 [2024-10-28 18:09:39.492322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:23.140 [2024-10-28 18:09:39.492335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.495727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.495781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:23.140 [2024-10-28 18:09:39.495801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.359 ms 00:19:23.140 [2024-10-28 18:09:39.495814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.495985] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:23.140 [2024-10-28 18:09:39.496954] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:23.140 [2024-10-28 18:09:39.497002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.497018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:23.140 [2024-10-28 18:09:39.497032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.029 ms 00:19:23.140 [2024-10-28 18:09:39.497045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.498359] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:23.140 [2024-10-28 18:09:39.515078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.515154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:23.140 [2024-10-28 18:09:39.515178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.719 ms 00:19:23.140 [2024-10-28 18:09:39.515193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.515372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.515396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:23.140 [2024-10-28 18:09:39.515412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.034 ms 00:19:23.140 [2024-10-28 18:09:39.515426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.520080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.520137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:23.140 [2024-10-28 18:09:39.520158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.588 ms 00:19:23.140 [2024-10-28 18:09:39.520171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.520329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.520353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:23.140 [2024-10-28 18:09:39.520367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.074 ms 00:19:23.140 [2024-10-28 18:09:39.520380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.520422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.520445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:23.140 [2024-10-28 18:09:39.520459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:23.140 [2024-10-28 18:09:39.520471] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.520505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:23.140 [2024-10-28 18:09:39.524866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.525054] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:23.140 [2024-10-28 18:09:39.525084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.371 ms 00:19:23.140 [2024-10-28 18:09:39.525099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.525184] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.525205] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:23.140 [2024-10-28 18:09:39.525219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:23.140 [2024-10-28 18:09:39.525231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.525270] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:23.140 [2024-10-28 18:09:39.525308] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:23.140 [2024-10-28 18:09:39.525354] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:23.140 [2024-10-28 18:09:39.525375] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:23.140 [2024-10-28 18:09:39.525492] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:23.140 [2024-10-28 18:09:39.525508] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:23.140 [2024-10-28 18:09:39.525524] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:23.140 [2024-10-28 18:09:39.525545] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:23.140 [2024-10-28 18:09:39.525583] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:23.140 [2024-10-28 18:09:39.525599] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:23.140 [2024-10-28 18:09:39.525611] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:23.140 [2024-10-28 18:09:39.525623] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:23.140 [2024-10-28 18:09:39.525635] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:23.140 [2024-10-28 18:09:39.525648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.525660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:23.140 [2024-10-28 18:09:39.525673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.382 ms 00:19:23.140 [2024-10-28 18:09:39.525685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.525816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.140 [2024-10-28 18:09:39.525864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:23.140 [2024-10-28 18:09:39.525890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:23.140 [2024-10-28 18:09:39.525903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.140 [2024-10-28 18:09:39.526023] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:23.140 [2024-10-28 18:09:39.526042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:23.140 [2024-10-28 18:09:39.526055] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.140 [2024-10-28 18:09:39.526069] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:23.140 [2024-10-28 18:09:39.526095] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:23.140 [2024-10-28 18:09:39.526124] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:23.140 [2024-10-28 18:09:39.526138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526151] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.140 [2024-10-28 18:09:39.526164] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:23.140 [2024-10-28 18:09:39.526176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:23.140 [2024-10-28 18:09:39.526189] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:23.140 [2024-10-28 18:09:39.526217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:23.140 [2024-10-28 18:09:39.526230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:23.140 [2024-10-28 18:09:39.526243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526255] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:23.140 [2024-10-28 18:09:39.526267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:23.140 [2024-10-28 18:09:39.526280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526292] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:23.140 [2024-10-28 18:09:39.526304] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526317] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.140 [2024-10-28 18:09:39.526330] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:23.140 [2024-10-28 18:09:39.526342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526355] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.140 [2024-10-28 18:09:39.526367] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:23.140 [2024-10-28 18:09:39.526381] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526394] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.140 [2024-10-28 18:09:39.526406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:23.140 [2024-10-28 18:09:39.526418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526431] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:23.140 [2024-10-28 18:09:39.526444] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:23.140 [2024-10-28 18:09:39.526456] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:23.140 [2024-10-28 18:09:39.526468] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.141 [2024-10-28 18:09:39.526480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:23.141 [2024-10-28 18:09:39.526493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:23.141 [2024-10-28 18:09:39.526506] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:23.141 [2024-10-28 18:09:39.526518] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:23.141 [2024-10-28 18:09:39.526530] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:23.141 [2024-10-28 18:09:39.526543] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.141 [2024-10-28 18:09:39.526555] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:23.141 [2024-10-28 18:09:39.526567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:23.141 [2024-10-28 18:09:39.526581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.141 [2024-10-28 18:09:39.526595] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:23.141 [2024-10-28 18:09:39.526617] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:23.141 [2024-10-28 18:09:39.526629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:23.141 [2024-10-28 18:09:39.526648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:23.141 [2024-10-28 18:09:39.526662] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:23.141 
[2024-10-28 18:09:39.526675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:23.141 [2024-10-28 18:09:39.526687] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:23.141 [2024-10-28 18:09:39.526700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:23.141 [2024-10-28 18:09:39.526712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:23.141 [2024-10-28 18:09:39.526725] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:23.141 [2024-10-28 18:09:39.526740] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:23.141 [2024-10-28 18:09:39.526756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.141 [2024-10-28 18:09:39.526769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:23.141 [2024-10-28 18:09:39.526781] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:23.141 [2024-10-28 18:09:39.526793] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:23.141 [2024-10-28 18:09:39.526806] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:23.141 [2024-10-28 18:09:39.526818] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:23.141 [2024-10-28 18:09:39.526830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:23.141 [2024-10-28 18:09:39.526860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:23.141 [2024-10-28 18:09:39.526873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:23.141 [2024-10-28 18:09:39.526885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:23.141 [2024-10-28 18:09:39.526898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:23.141 [2024-10-28 18:09:39.526910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:23.141 [2024-10-28 18:09:39.526922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:23.141 [2024-10-28 18:09:39.526934] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:23.141 [2024-10-28 18:09:39.526946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:23.141 [2024-10-28 18:09:39.526958] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:23.141 [2024-10-28 18:09:39.526972] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:23.141 [2024-10-28 18:09:39.526986] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:23.141 [2024-10-28 18:09:39.526999] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:23.141 [2024-10-28 18:09:39.527011] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:23.141 [2024-10-28 18:09:39.527024] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:23.141 [2024-10-28 18:09:39.527037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.141 [2024-10-28 18:09:39.527049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:23.141 [2024-10-28 18:09:39.527068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.087 ms 00:19:23.141 [2024-10-28 18:09:39.527081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.141 [2024-10-28 18:09:39.560471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.141 [2024-10-28 18:09:39.560683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:23.141 [2024-10-28 18:09:39.560719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.315 ms 00:19:23.141 [2024-10-28 18:09:39.560735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.141 [2024-10-28 18:09:39.560959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.141 [2024-10-28 18:09:39.560991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:23.141 [2024-10-28 18:09:39.561005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:19:23.141 [2024-10-28 18:09:39.561018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.141 [2024-10-28 18:09:39.613804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.141 [2024-10-28 18:09:39.613886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:23.141 [2024-10-28 18:09:39.613910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 52.750 ms 00:19:23.141 [2024-10-28 18:09:39.613932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.141 [2024-10-28 18:09:39.614107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.141 [2024-10-28 18:09:39.614128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:23.141 [2024-10-28 18:09:39.614143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:23.141 [2024-10-28 18:09:39.614155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.141 [2024-10-28 18:09:39.614486] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.141 [2024-10-28 18:09:39.614513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:23.141 [2024-10-28 18:09:39.614528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.296 ms 00:19:23.141 [2024-10-28 18:09:39.614547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.141 [2024-10-28 
18:09:39.614712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.141 [2024-10-28 18:09:39.614733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:23.141 [2024-10-28 18:09:39.614747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms 00:19:23.141 [2024-10-28 18:09:39.614759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.631896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.632119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:23.399 [2024-10-28 18:09:39.632155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.103 ms 00:19:23.399 [2024-10-28 18:09:39.632170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.649185] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:23.399 [2024-10-28 18:09:39.649238] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:23.399 [2024-10-28 18:09:39.649259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.649271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:23.399 [2024-10-28 18:09:39.649286] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.879 ms 00:19:23.399 [2024-10-28 18:09:39.649298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.680355] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.680435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:23.399 [2024-10-28 18:09:39.680457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.942 ms 00:19:23.399 [2024-10-28 18:09:39.680470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.696634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.696687] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:23.399 [2024-10-28 18:09:39.696707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.985 ms 00:19:23.399 [2024-10-28 18:09:39.696720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.712993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.713037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:23.399 [2024-10-28 18:09:39.713054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.166 ms 00:19:23.399 [2024-10-28 18:09:39.713066] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.713986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.714023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:23.399 [2024-10-28 18:09:39.714040] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.768 ms 00:19:23.399 [2024-10-28 18:09:39.714053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.786289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.786579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:23.399 [2024-10-28 18:09:39.786613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.179 ms 00:19:23.399 [2024-10-28 18:09:39.786628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.798345] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:23.399 [2024-10-28 18:09:39.812565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.812637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:23.399 [2024-10-28 18:09:39.812658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.768 ms 00:19:23.399 [2024-10-28 18:09:39.812671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.812818] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.812881] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:23.399 [2024-10-28 18:09:39.812914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:23.399 [2024-10-28 18:09:39.812926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.812995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.813011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:23.399 [2024-10-28 18:09:39.813024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:23.399 [2024-10-28 18:09:39.813036] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.813076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.813095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:23.399 [2024-10-28 18:09:39.813107] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:23.399 [2024-10-28 18:09:39.813122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.813161] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:23.399 [2024-10-28 18:09:39.813177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.813188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:23.399 [2024-10-28 18:09:39.813215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:19:23.399 [2024-10-28 18:09:39.813244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.841788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.841848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:23.399 [2024-10-28 18:09:39.841869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.481 ms 00:19:23.399 [2024-10-28 18:09:39.841883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.842032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:23.399 [2024-10-28 18:09:39.842070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:23.399 [2024-10-28 18:09:39.842100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:19:23.399 [2024-10-28 18:09:39.842130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:23.399 [2024-10-28 18:09:39.843256] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:23.399 [2024-10-28 18:09:39.847184] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 350.633 ms, result 0 00:19:23.399 [2024-10-28 18:09:39.848116] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:23.399 [2024-10-28 18:09:39.863787] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:24.773  [2024-10-28T18:09:42.185Z] Copying: 25/256 [MB] (25 MBps) [2024-10-28T18:09:43.119Z] Copying: 49/256 [MB] (23 MBps) [2024-10-28T18:09:44.080Z] Copying: 73/256 [MB] (23 MBps) [2024-10-28T18:09:45.014Z] Copying: 96/256 [MB] (22 MBps) [2024-10-28T18:09:45.947Z] Copying: 120/256 [MB] (24 MBps) [2024-10-28T18:09:46.881Z] Copying: 143/256 [MB] (23 MBps) [2024-10-28T18:09:48.255Z] Copying: 167/256 [MB] (23 MBps) [2024-10-28T18:09:49.188Z] Copying: 189/256 [MB] (22 MBps) [2024-10-28T18:09:50.119Z] Copying: 212/256 [MB] (23 MBps) [2024-10-28T18:09:51.054Z] Copying: 235/256 [MB] (22 MBps) [2024-10-28T18:09:51.054Z] Copying: 256/256 [MB] (average 23 MBps)[2024-10-28 18:09:50.733650] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:34.576 [2024-10-28 18:09:50.746374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.746571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:34.576 [2024-10-28 18:09:50.746719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:34.576 [2024-10-28 18:09:50.746886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.746971] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:34.576 [2024-10-28 18:09:50.750509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.750676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:34.576 [2024-10-28 18:09:50.750884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:19:34.576 [2024-10-28 18:09:50.750941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.751353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.751496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:34.576 [2024-10-28 18:09:50.751619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.273 ms 00:19:34.576 [2024-10-28 18:09:50.751672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.755442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.755621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:34.576 [2024-10-28 18:09:50.755752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.635 ms 00:19:34.576 [2024-10-28 18:09:50.755882] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.763342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.763498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:34.576 [2024-10-28 18:09:50.763629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.330 ms 00:19:34.576 [2024-10-28 18:09:50.763739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.795197] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.795394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:34.576 [2024-10-28 18:09:50.795535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.311 ms 00:19:34.576 [2024-10-28 18:09:50.795589] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.813386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.813454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:34.576 [2024-10-28 18:09:50.813475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.585 ms 00:19:34.576 [2024-10-28 18:09:50.813502] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.813697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.813721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:34.576 [2024-10-28 18:09:50.813736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.111 ms 00:19:34.576 [2024-10-28 18:09:50.813749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.846787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.847026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:34.576 [2024-10-28 18:09:50.847074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.986 ms 00:19:34.576 [2024-10-28 18:09:50.847090] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.879669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.879731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:34.576 [2024-10-28 18:09:50.879754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.503 ms 00:19:34.576 [2024-10-28 18:09:50.879768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.911906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.911988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:34.576 [2024-10-28 18:09:50.912026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.030 ms 00:19:34.576 [2024-10-28 18:09:50.912039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.944380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.576 [2024-10-28 18:09:50.944453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:34.576 [2024-10-28 18:09:50.944475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 32.199 ms 00:19:34.576 [2024-10-28 18:09:50.944489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.576 [2024-10-28 18:09:50.944621] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:34.576 [2024-10-28 18:09:50.944650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944817] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 
[2024-10-28 18:09:50.944968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.944993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.945009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.945022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.945039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.945051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.945064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.945076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.945089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:34.576 [2024-10-28 18:09:50.945102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 
state: free 00:19:34.577 [2024-10-28 18:09:50.945285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 
0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945862] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945887] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945944] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.945982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:34.577 [2024-10-28 18:09:50.946004] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:34.577 [2024-10-28 18:09:50.946017] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cdd57b4-e896-4e97-a9c9-3c575802f024 00:19:34.577 [2024-10-28 18:09:50.946029] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:34.577 [2024-10-28 18:09:50.946040] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:34.577 [2024-10-28 18:09:50.946052] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:34.577 [2024-10-28 18:09:50.946064] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:34.577 [2024-10-28 18:09:50.946076] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:34.577 [2024-10-28 18:09:50.946088] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:34.577 [2024-10-28 18:09:50.946100] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:34.577 [2024-10-28 18:09:50.946110] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:34.577 [2024-10-28 18:09:50.946121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:34.577 [2024-10-28 18:09:50.946133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.577 [2024-10-28 18:09:50.946159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:34.577 [2024-10-28 18:09:50.946172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.513 ms 00:19:34.577 [2024-10-28 18:09:50.946185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.577 [2024-10-28 18:09:50.963074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.577 [2024-10-28 18:09:50.963124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:34.577 [2024-10-28 18:09:50.963160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.856 ms 00:19:34.577 [2024-10-28 18:09:50.963173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.577 [2024-10-28 18:09:50.963675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:34.577 [2024-10-28 18:09:50.963711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:34.577 [2024-10-28 18:09:50.963728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.436 ms 00:19:34.577 [2024-10-28 18:09:50.963741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.577 [2024-10-28 18:09:51.011932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.577 [2024-10-28 18:09:51.012031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:34.577 [2024-10-28 18:09:51.012067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.577 [2024-10-28 18:09:51.012078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.577 [2024-10-28 18:09:51.012220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.577 [2024-10-28 
18:09:51.012240] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:34.577 [2024-10-28 18:09:51.012253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.577 [2024-10-28 18:09:51.012279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.577 [2024-10-28 18:09:51.012350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.577 [2024-10-28 18:09:51.012370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:34.577 [2024-10-28 18:09:51.012382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.577 [2024-10-28 18:09:51.012394] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.578 [2024-10-28 18:09:51.012418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.578 [2024-10-28 18:09:51.012447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:34.578 [2024-10-28 18:09:51.012459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.578 [2024-10-28 18:09:51.012470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.836 [2024-10-28 18:09:51.109247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.836 [2024-10-28 18:09:51.109315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:34.836 [2024-10-28 18:09:51.109351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.836 [2024-10-28 18:09:51.109364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.836 [2024-10-28 18:09:51.193265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.836 [2024-10-28 18:09:51.193338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:34.836 [2024-10-28 18:09:51.193360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.836 [2024-10-28 18:09:51.193374] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.836 [2024-10-28 18:09:51.193512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.836 [2024-10-28 18:09:51.193555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:34.836 [2024-10-28 18:09:51.193573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.836 [2024-10-28 18:09:51.193595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.836 [2024-10-28 18:09:51.193668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.836 [2024-10-28 18:09:51.193685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:34.836 [2024-10-28 18:09:51.193706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.836 [2024-10-28 18:09:51.193718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.836 [2024-10-28 18:09:51.193872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.836 [2024-10-28 18:09:51.193895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:34.836 [2024-10-28 18:09:51.193909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.836 [2024-10-28 18:09:51.193923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.836 [2024-10-28 18:09:51.193982] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.836 [2024-10-28 18:09:51.194012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:34.836 [2024-10-28 18:09:51.194052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.836 [2024-10-28 18:09:51.194083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.836 [2024-10-28 18:09:51.194147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.836 [2024-10-28 18:09:51.194165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:34.836 [2024-10-28 18:09:51.194178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.836 [2024-10-28 18:09:51.194198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.836 [2024-10-28 18:09:51.194264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:34.836 [2024-10-28 18:09:51.194480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:34.836 [2024-10-28 18:09:51.194528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:34.836 [2024-10-28 18:09:51.194543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:34.836 [2024-10-28 18:09:51.194727] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 448.342 ms, result 0 00:19:35.773 00:19:35.773 00:19:35.773 18:09:52 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:19:35.773 18:09:52 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:36.340 18:09:52 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:36.598 [2024-10-28 18:09:52.828935] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
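The three ftl.ftl_trim steps above are the core of the check: cmp confirms that the first 4 MiB of the read-back file compares equal to /dev/zero (consistent with the trimmed range reading back as zeroes), md5sum fingerprints the data file for a later comparison, and spdk_dd then writes a known random pattern through the ftl0 bdev described by ftl.json. A minimal sketch of the same sequence, assuming a prior run has produced the data file and the ftl.json config; TESTDIR and SPDK_BIN are illustrative placeholders taken from the paths in the log, not variables defined by the suite:

  # placeholders mirroring the paths visible in the log above
  TESTDIR=/home/vagrant/spdk_repo/spdk/test/ftl
  SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin

  # the 4 MiB prefix of the read-back file must compare equal to /dev/zero
  cmp --bytes=4194304 "$TESTDIR/data" /dev/zero

  # fingerprint the file so a later read-back can be diffed cheaply
  md5sum "$TESTDIR/data"

  # overwrite the device with a known random pattern via the ftl0 bdev
  "$SPDK_BIN/spdk_dd" --if="$TESTDIR/random_pattern" --ob=ftl0 \
      --count=1024 --json="$TESTDIR/config/ftl.json"

The log resumes below with spdk_dd's DPDK EAL startup parameters.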
00:19:36.598 [2024-10-28 18:09:52.829240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75831 ] 00:19:36.598 [2024-10-28 18:09:53.011319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.856 [2024-10-28 18:09:53.115774] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.116 [2024-10-28 18:09:53.424696] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.116 [2024-10-28 18:09:53.424789] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:37.116 [2024-10-28 18:09:53.588295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.116 [2024-10-28 18:09:53.588361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:37.116 [2024-10-28 18:09:53.588380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.116 [2024-10-28 18:09:53.588391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.116 [2024-10-28 18:09:53.592158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.116 [2024-10-28 18:09:53.592199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:37.116 [2024-10-28 18:09:53.592231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.740 ms 00:19:37.116 [2024-10-28 18:09:53.592241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.116 [2024-10-28 18:09:53.592417] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:37.375 [2024-10-28 18:09:53.593534] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:37.375 [2024-10-28 18:09:53.593577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.375 [2024-10-28 18:09:53.593592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:37.375 [2024-10-28 18:09:53.593604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.170 ms 00:19:37.375 [2024-10-28 18:09:53.593626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.375 [2024-10-28 18:09:53.595079] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:37.375 [2024-10-28 18:09:53.612056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.375 [2024-10-28 18:09:53.612108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:37.375 [2024-10-28 18:09:53.612143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.977 ms 00:19:37.375 [2024-10-28 18:09:53.612155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.375 [2024-10-28 18:09:53.612289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.375 [2024-10-28 18:09:53.612312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:37.375 [2024-10-28 18:09:53.612326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:37.375 [2024-10-28 18:09:53.612337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.375 [2024-10-28 18:09:53.617076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:37.375 [2024-10-28 18:09:53.617143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:37.375 [2024-10-28 18:09:53.617178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.678 ms 00:19:37.375 [2024-10-28 18:09:53.617202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.375 [2024-10-28 18:09:53.617362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.375 [2024-10-28 18:09:53.617385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:37.375 [2024-10-28 18:09:53.617398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.087 ms 00:19:37.375 [2024-10-28 18:09:53.617410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.375 [2024-10-28 18:09:53.617449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.375 [2024-10-28 18:09:53.617471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:37.375 [2024-10-28 18:09:53.617483] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:37.375 [2024-10-28 18:09:53.617494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.375 [2024-10-28 18:09:53.617528] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:37.375 [2024-10-28 18:09:53.621888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.375 [2024-10-28 18:09:53.622084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:37.375 [2024-10-28 18:09:53.622112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.371 ms 00:19:37.375 [2024-10-28 18:09:53.622125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.375 [2024-10-28 18:09:53.622209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.375 [2024-10-28 18:09:53.622228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:37.375 [2024-10-28 18:09:53.622241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:37.375 [2024-10-28 18:09:53.622252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.375 [2024-10-28 18:09:53.622310] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:37.375 [2024-10-28 18:09:53.622345] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:37.375 [2024-10-28 18:09:53.622390] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:37.376 [2024-10-28 18:09:53.622411] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:37.376 [2024-10-28 18:09:53.622523] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:37.376 [2024-10-28 18:09:53.622540] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:37.376 [2024-10-28 18:09:53.622555] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:37.376 [2024-10-28 18:09:53.622570] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:37.376 [2024-10-28 18:09:53.622588] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:37.376 [2024-10-28 18:09:53.622601] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:37.376 [2024-10-28 18:09:53.622612] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:37.376 [2024-10-28 18:09:53.622623] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:37.376 [2024-10-28 18:09:53.622634] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:37.376 [2024-10-28 18:09:53.622646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.376 [2024-10-28 18:09:53.622657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:37.376 [2024-10-28 18:09:53.622669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.340 ms 00:19:37.376 [2024-10-28 18:09:53.622680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.376 [2024-10-28 18:09:53.622782] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.376 [2024-10-28 18:09:53.622798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:37.376 [2024-10-28 18:09:53.622815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:37.376 [2024-10-28 18:09:53.622826] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.376 [2024-10-28 18:09:53.622971] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:37.376 [2024-10-28 18:09:53.622990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:37.376 [2024-10-28 18:09:53.623003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.376 [2024-10-28 18:09:53.623015] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623027] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:37.376 [2024-10-28 18:09:53.623037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:37.376 [2024-10-28 18:09:53.623059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:37.376 [2024-10-28 18:09:53.623072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623082] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.376 [2024-10-28 18:09:53.623093] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:37.376 [2024-10-28 18:09:53.623103] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:37.376 [2024-10-28 18:09:53.623114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:37.376 [2024-10-28 18:09:53.623140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:37.376 [2024-10-28 18:09:53.623151] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:37.376 [2024-10-28 18:09:53.623162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:37.376 [2024-10-28 18:09:53.623184] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:37.376 [2024-10-28 18:09:53.623195] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623206] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:37.376 [2024-10-28 18:09:53.623216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623227] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.376 [2024-10-28 18:09:53.623237] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:37.376 [2024-10-28 18:09:53.623248] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.376 [2024-10-28 18:09:53.623269] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:37.376 [2024-10-28 18:09:53.623279] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623290] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.376 [2024-10-28 18:09:53.623300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:37.376 [2024-10-28 18:09:53.623311] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:37.376 [2024-10-28 18:09:53.623332] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:37.376 [2024-10-28 18:09:53.623342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623353] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.376 [2024-10-28 18:09:53.623363] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:37.376 [2024-10-28 18:09:53.623374] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:37.376 [2024-10-28 18:09:53.623384] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:37.376 [2024-10-28 18:09:53.623394] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:37.376 [2024-10-28 18:09:53.623405] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:37.376 [2024-10-28 18:09:53.623415] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623426] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:37.376 [2024-10-28 18:09:53.623437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:37.376 [2024-10-28 18:09:53.623447] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623459] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:37.376 [2024-10-28 18:09:53.623471] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:37.376 [2024-10-28 18:09:53.623482] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:37.376 [2024-10-28 18:09:53.623498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:37.376 [2024-10-28 18:09:53.623510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:37.376 [2024-10-28 18:09:53.623522] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:37.376 [2024-10-28 18:09:53.623533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:37.376 
[2024-10-28 18:09:53.623544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:37.376 [2024-10-28 18:09:53.623554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:37.376 [2024-10-28 18:09:53.623564] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:37.376 [2024-10-28 18:09:53.623577] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:37.376 [2024-10-28 18:09:53.623591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.376 [2024-10-28 18:09:53.623604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:37.376 [2024-10-28 18:09:53.623615] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:37.376 [2024-10-28 18:09:53.623626] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:37.376 [2024-10-28 18:09:53.623638] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:37.376 [2024-10-28 18:09:53.623649] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:37.376 [2024-10-28 18:09:53.623661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:37.376 [2024-10-28 18:09:53.623672] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:37.376 [2024-10-28 18:09:53.623683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:37.376 [2024-10-28 18:09:53.623695] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:37.376 [2024-10-28 18:09:53.623706] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:37.376 [2024-10-28 18:09:53.623717] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:37.376 [2024-10-28 18:09:53.623728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:37.376 [2024-10-28 18:09:53.623740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:37.376 [2024-10-28 18:09:53.623752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:37.376 [2024-10-28 18:09:53.623763] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:37.376 [2024-10-28 18:09:53.623776] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:37.376 [2024-10-28 18:09:53.623789] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:37.376 [2024-10-28 18:09:53.623801] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:37.376 [2024-10-28 18:09:53.623812] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:37.376 [2024-10-28 18:09:53.623824] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:37.376 [2024-10-28 18:09:53.623853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.376 [2024-10-28 18:09:53.623867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:37.376 [2024-10-28 18:09:53.623886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.953 ms 00:19:37.376 [2024-10-28 18:09:53.623897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.376 [2024-10-28 18:09:53.658559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.376 [2024-10-28 18:09:53.658827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:37.376 [2024-10-28 18:09:53.658985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.576 ms 00:19:37.376 [2024-10-28 18:09:53.659117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.376 [2024-10-28 18:09:53.659357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.376 [2024-10-28 18:09:53.659512] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:37.377 [2024-10-28 18:09:53.659649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:19:37.377 [2024-10-28 18:09:53.659703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.710091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.710340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:37.377 [2024-10-28 18:09:53.710465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.286 ms 00:19:37.377 [2024-10-28 18:09:53.710525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.710815] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.710960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:37.377 [2024-10-28 18:09:53.711085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:37.377 [2024-10-28 18:09:53.711225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.711620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.711754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:37.377 [2024-10-28 18:09:53.711894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.311 ms 00:19:37.377 [2024-10-28 18:09:53.712009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.712296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.712429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:37.377 [2024-10-28 18:09:53.712545] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:19:37.377 [2024-10-28 18:09:53.712654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.730187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.730389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:37.377 [2024-10-28 18:09:53.730515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.458 ms 00:19:37.377 [2024-10-28 18:09:53.730566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.747674] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:37.377 [2024-10-28 18:09:53.747906] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:37.377 [2024-10-28 18:09:53.748051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.748097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:37.377 [2024-10-28 18:09:53.748239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.186 ms 00:19:37.377 [2024-10-28 18:09:53.748302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.779407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.779698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:37.377 [2024-10-28 18:09:53.779822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.955 ms 00:19:37.377 [2024-10-28 18:09:53.779894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.797228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.797395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:37.377 [2024-10-28 18:09:53.797515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.062 ms 00:19:37.377 [2024-10-28 18:09:53.797675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.813716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.813764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:37.377 [2024-10-28 18:09:53.813781] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.917 ms 00:19:37.377 [2024-10-28 18:09:53.813793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.377 [2024-10-28 18:09:53.814657] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.377 [2024-10-28 18:09:53.814812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:37.377 [2024-10-28 18:09:53.814860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.683 ms 00:19:37.377 [2024-10-28 18:09:53.814876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-10-28 18:09:53.888133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-10-28 18:09:53.888225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:37.636 [2024-10-28 18:09:53.888264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 73.212 ms 00:19:37.636 [2024-10-28 18:09:53.888277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-10-28 18:09:53.901134] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:37.636 [2024-10-28 18:09:53.915270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-10-28 18:09:53.915347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:37.636 [2024-10-28 18:09:53.915367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.823 ms 00:19:37.636 [2024-10-28 18:09:53.915380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-10-28 18:09:53.915551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-10-28 18:09:53.915572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:37.636 [2024-10-28 18:09:53.915586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:37.636 [2024-10-28 18:09:53.915597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-10-28 18:09:53.915669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-10-28 18:09:53.915686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:37.636 [2024-10-28 18:09:53.915698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:19:37.636 [2024-10-28 18:09:53.915710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-10-28 18:09:53.915751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-10-28 18:09:53.915771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:37.636 [2024-10-28 18:09:53.915783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:37.636 [2024-10-28 18:09:53.915794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-10-28 18:09:53.915862] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:37.636 [2024-10-28 18:09:53.915883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-10-28 18:09:53.915895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:37.636 [2024-10-28 18:09:53.915907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:19:37.636 [2024-10-28 18:09:53.915918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-10-28 18:09:53.947558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-10-28 18:09:53.947642] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:37.636 [2024-10-28 18:09:53.947663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.603 ms 00:19:37.636 [2024-10-28 18:09:53.947675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.636 [2024-10-28 18:09:53.947879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.636 [2024-10-28 18:09:53.947902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:37.636 [2024-10-28 18:09:53.947916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.077 ms 00:19:37.636 [2024-10-28 18:09:53.947928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
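Each management step in the startup sequence above is traced as a fixed quadruple of entries: Action (or Rollback), name, duration, status. A minimal shell sketch for pulling per-step timings out of a capture like this one — assuming the usual one-entry-per-line log layout and a hypothetical file name ftl.log:

    # pair every "name:" trace_step entry with the "duration:" entry that follows it
    awk '/428:trace_step/ { sub(/.*name: /, "");     step = $0 }
         /430:trace_step/ { sub(/.*duration: /, ""); print step " :: " $0 }' ftl.log

On the lines above this would emit, for example, "Initialize NV cache :: 50.286 ms".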
00:19:37.636 [2024-10-28 18:09:53.948957] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.636 [2024-10-28 18:09:53.953254] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 360.265 ms, result 0 00:19:37.636 [2024-10-28 18:09:53.954146] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.636 [2024-10-28 18:09:53.970794] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:37.896  [2024-10-28T18:09:54.374Z] Copying: 4096/4096 [kB] (average 25 MBps)[2024-10-28 18:09:54.130063] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:37.896 [2024-10-28 18:09:54.142870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.896 [2024-10-28 18:09:54.142995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:37.896 [2024-10-28 18:09:54.143032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:37.896 [2024-10-28 18:09:54.143057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.896 [2024-10-28 18:09:54.143091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:37.896 [2024-10-28 18:09:54.146491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.896 [2024-10-28 18:09:54.146526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:37.896 [2024-10-28 18:09:54.146557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.378 ms 00:19:37.896 [2024-10-28 18:09:54.146568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.896 [2024-10-28 18:09:54.148361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.896 [2024-10-28 18:09:54.148531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:37.896 [2024-10-28 18:09:54.148559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.734 ms 00:19:37.896 [2024-10-28 18:09:54.148573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.896 [2024-10-28 18:09:54.153383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.896 [2024-10-28 18:09:54.153464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:37.896 [2024-10-28 18:09:54.153493] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.775 ms 00:19:37.896 [2024-10-28 18:09:54.153514] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.896 [2024-10-28 18:09:54.162646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.896 [2024-10-28 18:09:54.162727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:37.896 [2024-10-28 18:09:54.162759] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.045 ms 00:19:37.896 [2024-10-28 18:09:54.162782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.896 [2024-10-28 18:09:54.196127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.896 [2024-10-28 18:09:54.196186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:37.896 [2024-10-28 18:09:54.196236] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 33.224 ms 00:19:37.896 [2024-10-28 18:09:54.196263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.897 [2024-10-28 18:09:54.214278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.897 [2024-10-28 18:09:54.214479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:37.897 [2024-10-28 18:09:54.214520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.943 ms 00:19:37.897 [2024-10-28 18:09:54.214533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.897 [2024-10-28 18:09:54.214734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.897 [2024-10-28 18:09:54.214757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:37.897 [2024-10-28 18:09:54.214770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:37.897 [2024-10-28 18:09:54.214782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.897 [2024-10-28 18:09:54.249134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.897 [2024-10-28 18:09:54.249203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:37.897 [2024-10-28 18:09:54.249225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.306 ms 00:19:37.897 [2024-10-28 18:09:54.249236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.897 [2024-10-28 18:09:54.282466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.897 [2024-10-28 18:09:54.282670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:37.897 [2024-10-28 18:09:54.282700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.147 ms 00:19:37.897 [2024-10-28 18:09:54.282714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.897 [2024-10-28 18:09:54.314584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.897 [2024-10-28 18:09:54.314781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:37.897 [2024-10-28 18:09:54.314811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.751 ms 00:19:37.897 [2024-10-28 18:09:54.314824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.897 [2024-10-28 18:09:54.345758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.897 [2024-10-28 18:09:54.345804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:37.897 [2024-10-28 18:09:54.345822] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.807 ms 00:19:37.897 [2024-10-28 18:09:54.345849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.897 [2024-10-28 18:09:54.345922] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:37.897 [2024-10-28 18:09:54.345946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.345961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.345973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.345986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:19:37.897 [2024-10-28 18:09:54.345998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346144] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346318] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346383] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:37.897 [2024-10-28 18:09:54.346832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346917] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.346988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:37.898 [2024-10-28 18:09:54.347202] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:37.898 [2024-10-28 18:09:54.347215] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cdd57b4-e896-4e97-a9c9-3c575802f024 00:19:37.898 [2024-10-28 18:09:54.347226] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:37.898 [2024-10-28 18:09:54.347238] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:19:37.898 [2024-10-28 18:09:54.347248] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:37.898 [2024-10-28 18:09:54.347260] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:37.898 [2024-10-28 18:09:54.347270] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:37.898 [2024-10-28 18:09:54.347282] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:37.898 [2024-10-28 18:09:54.347293] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:37.898 [2024-10-28 18:09:54.347303] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:37.898 [2024-10-28 18:09:54.347313] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:37.898 [2024-10-28 18:09:54.347325] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.898 [2024-10-28 18:09:54.347342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:37.898 [2024-10-28 18:09:54.347355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.405 ms 00:19:37.898 [2024-10-28 18:09:54.347365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.898 [2024-10-28 18:09:54.364236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.898 [2024-10-28 18:09:54.364280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:37.898 [2024-10-28 18:09:54.364313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.844 ms 00:19:37.898 [2024-10-28 18:09:54.364324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:37.898 [2024-10-28 18:09:54.364796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:37.898 [2024-10-28 18:09:54.364821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:37.898 [2024-10-28 18:09:54.364866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.423 ms 00:19:37.898 [2024-10-28 18:09:54.364879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.409285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.409358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:38.157 [2024-10-28 18:09:54.409392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.157 [2024-10-28 18:09:54.409402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.409526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.409542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:38.157 [2024-10-28 18:09:54.409554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.157 [2024-10-28 18:09:54.409565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.409684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.409705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:38.157 [2024-10-28 18:09:54.409718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.157 [2024-10-28 18:09:54.409729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.409757] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.409790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:38.157 [2024-10-28 18:09:54.409802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.157 [2024-10-28 18:09:54.409813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.506847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.507168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:38.157 [2024-10-28 18:09:54.507200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.157 [2024-10-28 18:09:54.507213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.594477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.594716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:38.157 [2024-10-28 18:09:54.594747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.157 [2024-10-28 18:09:54.594760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.594881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.594902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:38.157 [2024-10-28 18:09:54.594914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.157 [2024-10-28 18:09:54.594925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.594961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.594975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:38.157 [2024-10-28 18:09:54.594995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.157 [2024-10-28 18:09:54.595007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.595135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.595155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:38.157 [2024-10-28 18:09:54.595169] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.157 [2024-10-28 18:09:54.595180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.157 [2024-10-28 18:09:54.595234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.157 [2024-10-28 18:09:54.595252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:38.158 [2024-10-28 18:09:54.595264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.158 [2024-10-28 18:09:54.595282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.158 [2024-10-28 18:09:54.595359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.158 [2024-10-28 18:09:54.595374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:38.158 [2024-10-28 18:09:54.595385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.158 [2024-10-28 18:09:54.595396] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:38.158 [2024-10-28 18:09:54.595446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:38.158 [2024-10-28 18:09:54.595463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:38.158 [2024-10-28 18:09:54.595480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:38.158 [2024-10-28 18:09:54.595491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:38.158 [2024-10-28 18:09:54.595668] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 452.817 ms, result 0 00:19:39.092 00:19:39.092 00:19:39.092 18:09:55 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=75866 00:19:39.092 18:09:55 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:39.092 18:09:55 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 75866 00:19:39.092 18:09:55 ftl.ftl_trim -- common/autotest_common.sh@833 -- # '[' -z 75866 ']' 00:19:39.092 18:09:55 ftl.ftl_trim -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.092 18:09:55 ftl.ftl_trim -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:39.092 18:09:55 ftl.ftl_trim -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.092 18:09:55 ftl.ftl_trim -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:39.092 18:09:55 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:39.350 [2024-10-28 18:09:55.651175] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
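waitforlisten above blocks until the freshly launched spdk_tgt answers on its UNIX-domain RPC socket. A rough sketch of that launch-and-wait pattern, using the paths from this run and the stock rpc_get_methods RPC (the 0.1 s poll interval is an arbitrary choice here, not necessarily what the harness uses):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init &
    svcpid=$!
    # poll the default RPC socket until the target is up and answering
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done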
00:19:39.350 [2024-10-28 18:09:55.652075] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75866 ] 00:19:39.608 [2024-10-28 18:09:55.827649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.608 [2024-10-28 18:09:55.929811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:40.542 18:09:56 ftl.ftl_trim -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:19:40.542 18:09:56 ftl.ftl_trim -- common/autotest_common.sh@866 -- # return 0 00:19:40.542 18:09:56 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:40.542 [2024-10-28 18:09:57.006331] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.542 [2024-10-28 18:09:57.006422] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:40.802 [2024-10-28 18:09:57.208588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.208660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:40.802 [2024-10-28 18:09:57.208689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:40.802 [2024-10-28 18:09:57.208704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.802 [2024-10-28 18:09:57.212825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.212904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:40.802 [2024-10-28 18:09:57.212928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.088 ms 00:19:40.802 [2024-10-28 18:09:57.212941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.802 [2024-10-28 18:09:57.213314] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:40.802 [2024-10-28 18:09:57.214314] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:40.802 [2024-10-28 18:09:57.214357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.214371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:40.802 [2024-10-28 18:09:57.214386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.061 ms 00:19:40.802 [2024-10-28 18:09:57.214398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.802 [2024-10-28 18:09:57.215715] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:40.802 [2024-10-28 18:09:57.232890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.232978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:40.802 [2024-10-28 18:09:57.233001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.194 ms 00:19:40.802 [2024-10-28 18:09:57.233021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.802 [2024-10-28 18:09:57.233235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.233265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:40.802 [2024-10-28 18:09:57.233281] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:19:40.802 [2024-10-28 18:09:57.233299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.802 [2024-10-28 18:09:57.238234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.238303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:40.802 [2024-10-28 18:09:57.238322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.844 ms 00:19:40.802 [2024-10-28 18:09:57.238340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.802 [2024-10-28 18:09:57.238566] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.238601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:40.802 [2024-10-28 18:09:57.238617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.134 ms 00:19:40.802 [2024-10-28 18:09:57.238635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.802 [2024-10-28 18:09:57.238688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.238712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:40.802 [2024-10-28 18:09:57.238727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:40.802 [2024-10-28 18:09:57.238744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.802 [2024-10-28 18:09:57.238783] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:40.802 [2024-10-28 18:09:57.243181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.243225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:40.802 [2024-10-28 18:09:57.243247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.401 ms 00:19:40.802 [2024-10-28 18:09:57.243261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.802 [2024-10-28 18:09:57.243383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.802 [2024-10-28 18:09:57.243403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:40.802 [2024-10-28 18:09:57.243422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:40.803 [2024-10-28 18:09:57.243441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.803 [2024-10-28 18:09:57.243480] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:40.803 [2024-10-28 18:09:57.243512] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:40.803 [2024-10-28 18:09:57.243577] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:40.803 [2024-10-28 18:09:57.243602] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:40.803 [2024-10-28 18:09:57.243719] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:40.803 [2024-10-28 18:09:57.243736] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:40.803 [2024-10-28 18:09:57.243760] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:40.803 [2024-10-28 18:09:57.243775] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:40.803 [2024-10-28 18:09:57.243791] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:40.803 [2024-10-28 18:09:57.243803] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:40.803 [2024-10-28 18:09:57.243816] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:40.803 [2024-10-28 18:09:57.243827] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:40.803 [2024-10-28 18:09:57.243868] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:40.803 [2024-10-28 18:09:57.243884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.803 [2024-10-28 18:09:57.243898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:40.803 [2024-10-28 18:09:57.243921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.416 ms 00:19:40.803 [2024-10-28 18:09:57.243935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.803 [2024-10-28 18:09:57.244040] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.803 [2024-10-28 18:09:57.244058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:40.803 [2024-10-28 18:09:57.244070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:19:40.803 [2024-10-28 18:09:57.244083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.803 [2024-10-28 18:09:57.244198] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:40.803 [2024-10-28 18:09:57.244217] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:40.803 [2024-10-28 18:09:57.244230] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.803 [2024-10-28 18:09:57.244243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:40.803 [2024-10-28 18:09:57.244269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:40.803 [2024-10-28 18:09:57.244302] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:40.803 [2024-10-28 18:09:57.244315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244334] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.803 [2024-10-28 18:09:57.244346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:40.803 [2024-10-28 18:09:57.244363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:40.803 [2024-10-28 18:09:57.244375] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:40.803 [2024-10-28 18:09:57.244392] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:40.803 [2024-10-28 18:09:57.244404] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:40.803 [2024-10-28 18:09:57.244421] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.803 
[2024-10-28 18:09:57.244433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:40.803 [2024-10-28 18:09:57.244449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:40.803 [2024-10-28 18:09:57.244461] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244477] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:40.803 [2024-10-28 18:09:57.244506] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244523] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.803 [2024-10-28 18:09:57.244536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:40.803 [2024-10-28 18:09:57.244557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.803 [2024-10-28 18:09:57.244587] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:40.803 [2024-10-28 18:09:57.244599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244615] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.803 [2024-10-28 18:09:57.244628] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:40.803 [2024-10-28 18:09:57.244644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:40.803 [2024-10-28 18:09:57.244672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:40.803 [2024-10-28 18:09:57.244684] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.803 [2024-10-28 18:09:57.244712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:40.803 [2024-10-28 18:09:57.244727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:40.803 [2024-10-28 18:09:57.244739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:40.803 [2024-10-28 18:09:57.244752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:40.803 [2024-10-28 18:09:57.244763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:40.803 [2024-10-28 18:09:57.244777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244788] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:40.803 [2024-10-28 18:09:57.244800] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:40.803 [2024-10-28 18:09:57.244811] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244823] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:40.803 [2024-10-28 18:09:57.244852] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:40.803 [2024-10-28 18:09:57.244869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:40.803 [2024-10-28 18:09:57.244880] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:40.803 [2024-10-28 18:09:57.244894] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:40.803 [2024-10-28 18:09:57.244905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:40.803 [2024-10-28 18:09:57.244917] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:40.803 [2024-10-28 18:09:57.244928] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:40.803 [2024-10-28 18:09:57.244949] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:40.803 [2024-10-28 18:09:57.244961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:40.803 [2024-10-28 18:09:57.244983] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:40.803 [2024-10-28 18:09:57.244997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:40.803 [2024-10-28 18:09:57.245014] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:40.803 [2024-10-28 18:09:57.245026] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:40.803 [2024-10-28 18:09:57.245040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:40.803 [2024-10-28 18:09:57.245051] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:40.803 [2024-10-28 18:09:57.245066] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:40.803 [2024-10-28 18:09:57.245078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:40.803 [2024-10-28 18:09:57.245097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:40.803 [2024-10-28 18:09:57.245110] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:40.803 [2024-10-28 18:09:57.245127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:40.803 [2024-10-28 18:09:57.245140] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:40.803 [2024-10-28 18:09:57.245157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:40.803 [2024-10-28 18:09:57.245170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:40.803 [2024-10-28 18:09:57.245187] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:40.803 [2024-10-28 18:09:57.245201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:40.803 [2024-10-28 18:09:57.245218] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:40.803 [2024-10-28 
18:09:57.245232] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:40.803 [2024-10-28 18:09:57.245254] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:40.803 [2024-10-28 18:09:57.245267] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:40.803 [2024-10-28 18:09:57.245284] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:40.803 [2024-10-28 18:09:57.245297] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:19:40.803 [2024-10-28 18:09:57.245317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:40.803 [2024-10-28 18:09:57.245330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:19:40.803 [2024-10-28 18:09:57.245348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.183 ms
00:19:40.803 [2024-10-28 18:09:57.245360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.281516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.281577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:41.063 [2024-10-28 18:09:57.281604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.060 ms
00:19:41.063 [2024-10-28 18:09:57.281625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.281859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.281882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:19:41.063 [2024-10-28 18:09:57.281914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms
00:19:41.063 [2024-10-28 18:09:57.281927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.324970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.325050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:41.063 [2024-10-28 18:09:57.325078] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.999 ms
00:19:41.063 [2024-10-28 18:09:57.325092] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.325265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.325286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:41.063 [2024-10-28 18:09:57.325306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:41.063 [2024-10-28 18:09:57.325319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.325683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.325708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:41.063 [2024-10-28 18:09:57.325737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.325 ms
00:19:41.063 [2024-10-28 18:09:57.325750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.325930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.325951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:41.063 [2024-10-28 18:09:57.325977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.144 ms
00:19:41.063 [2024-10-28 18:09:57.325990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.345278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.345345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:41.063 [2024-10-28 18:09:57.345373] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.246 ms
00:19:41.063 [2024-10-28 18:09:57.345388] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.362553] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:19:41.063 [2024-10-28 18:09:57.362632] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:19:41.063 [2024-10-28 18:09:57.362663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.362679] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:19:41.063 [2024-10-28 18:09:57.362701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.087 ms
00:19:41.063 [2024-10-28 18:09:57.362715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.393430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.393517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:19:41.063 [2024-10-28 18:09:57.393546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.476 ms
00:19:41.063 [2024-10-28 18:09:57.393561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.410149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.410225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:19:41.063 [2024-10-28 18:09:57.410258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.356 ms
00:19:41.063 [2024-10-28 18:09:57.410272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.426305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.426385] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:19:41.063 [2024-10-28 18:09:57.426414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.853 ms
00:19:41.063 [2024-10-28 18:09:57.426428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.427396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.427432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:19:41.063 [2024-10-28 18:09:57.427455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.748 ms
00:19:41.063 [2024-10-28 18:09:57.427469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.514673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.063 [2024-10-28 18:09:57.514756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:19:41.063 [2024-10-28 18:09:57.514782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 87.164 ms
00:19:41.063 [2024-10-28 18:09:57.514795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.063 [2024-10-28 18:09:57.527851] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:19:41.322 [2024-10-28 18:09:57.542092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.322 [2024-10-28 18:09:57.542179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:19:41.322 [2024-10-28 18:09:57.542203] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.112 ms
00:19:41.322 [2024-10-28 18:09:57.542217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.322 [2024-10-28 18:09:57.542399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.322 [2024-10-28 18:09:57.542423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:19:41.322 [2024-10-28 18:09:57.542437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms
00:19:41.323 [2024-10-28 18:09:57.542464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.323 [2024-10-28 18:09:57.542536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.323 [2024-10-28 18:09:57.542563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:19:41.323 [2024-10-28 18:09:57.542577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms
00:19:41.323 [2024-10-28 18:09:57.542594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.323 [2024-10-28 18:09:57.542635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.323 [2024-10-28 18:09:57.542656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:19:41.323 [2024-10-28 18:09:57.542670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms
00:19:41.323 [2024-10-28 18:09:57.542691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.323 [2024-10-28 18:09:57.542742] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:19:41.323 [2024-10-28 18:09:57.542771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.323 [2024-10-28 18:09:57.542784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:19:41.323 [2024-10-28 18:09:57.542810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms
00:19:41.323 [2024-10-28 18:09:57.542823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.323 [2024-10-28 18:09:57.575024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.323 [2024-10-28 18:09:57.575101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:41.323 [2024-10-28 18:09:57.575130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.104 ms
00:19:41.323 [2024-10-28 18:09:57.575145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.323 [2024-10-28 18:09:57.575364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.323 [2024-10-28 18:09:57.575386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:41.323 [2024-10-28 18:09:57.575414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms
00:19:41.323 [2024-10-28 18:09:57.575434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.323 [2024-10-28 18:09:57.576501] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:41.323 [2024-10-28 18:09:57.580941] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 367.552 ms, result 0
00:19:41.323 [2024-10-28 18:09:57.582093] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:41.323 Some configs were skipped because the RPC state that can call them passed over.
00:19:41.323 18:09:57 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
00:19:41.581 [2024-10-28 18:09:57.936431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.581 [2024-10-28 18:09:57.936513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:41.581 [2024-10-28 18:09:57.936535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.481 ms
00:19:41.581 [2024-10-28 18:09:57.936553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.582 [2024-10-28 18:09:57.936610] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.669 ms, result 0
00:19:41.582 true
00:19:41.582 18:09:57 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024
00:19:41.841 [2024-10-28 18:09:58.224525] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:41.841 [2024-10-28 18:09:58.224592] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim
00:19:41.841 [2024-10-28 18:09:58.224626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.160 ms
00:19:41.841 [2024-10-28 18:09:58.224640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:41.841 [2024-10-28 18:09:58.224730] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.368 ms, result 0
00:19:41.841 true
00:19:41.841 18:09:58 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 75866
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75866 ']'
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75866
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@957 -- # uname
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 75866
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:19:41.841 killing process with pid 75866
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@970 -- # echo 'killing process with pid 75866'
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@971 -- # kill 75866
00:19:41.841 18:09:58 ftl.ftl_trim -- common/autotest_common.sh@976 -- # wait 75866
00:19:42.776 [2024-10-28 18:09:59.218416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.776 [2024-10-28 18:09:59.218496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:42.776 [2024-10-28 18:09:59.218517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:42.776 [2024-10-28 18:09:59.218532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.776 [2024-10-28 18:09:59.218566] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:42.776 [2024-10-28 18:09:59.221909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.776 [2024-10-28 18:09:59.221947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:42.776 [2024-10-28 18:09:59.221968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.314 ms
00:19:42.776 [2024-10-28 18:09:59.221980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.776 [2024-10-28 18:09:59.222298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.776 [2024-10-28 18:09:59.222327] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:42.776 [2024-10-28 18:09:59.222344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.257 ms
00:19:42.776 [2024-10-28 18:09:59.222355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.776 [2024-10-28 18:09:59.226471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.776 [2024-10-28 18:09:59.226516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:42.776 [2024-10-28 18:09:59.226538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.084 ms
00:19:42.776 [2024-10-28 18:09:59.226551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.776 [2024-10-28 18:09:59.234155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.776 [2024-10-28 18:09:59.234218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:19:42.776 [2024-10-28 18:09:59.234239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.534 ms
00:19:42.776 [2024-10-28 18:09:59.234251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:42.776 [2024-10-28 18:09:59.247316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:42.776 [2024-10-28 18:09:59.247401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata
00:19:42.776 [2024-10-28 18:09:59.247427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.942 ms
00:19:42.776 [2024-10-28 18:09:59.247457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.036 [2024-10-28 18:09:59.256164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.036 [2024-10-28 18:09:59.256233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata
00:19:43.036 [2024-10-28 18:09:59.256259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.568 ms
00:19:43.036 [2024-10-28 18:09:59.256271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.036 [2024-10-28 18:09:59.256458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.036 [2024-10-28 18:09:59.256478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata
00:19:43.036 [2024-10-28 18:09:59.256494] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms
00:19:43.036 [2024-10-28 18:09:59.256505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.036 [2024-10-28 18:09:59.270174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.036 [2024-10-28 18:09:59.270252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata
00:19:43.036 [2024-10-28 18:09:59.270273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.626 ms
00:19:43.036 [2024-10-28 18:09:59.270285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.036 [2024-10-28 18:09:59.283395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.036 [2024-10-28 18:09:59.283472] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata
00:19:43.036 [2024-10-28 18:09:59.283505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.962 ms
00:19:43.036 [2024-10-28 18:09:59.283518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.036 [2024-10-28 18:09:59.296312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.036 [2024-10-28 18:09:59.296380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock
00:19:43.036 [2024-10-28 18:09:59.296412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.679 ms
00:19:43.036 [2024-10-28 18:09:59.296426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.036 [2024-10-28 18:09:59.309405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.036 [2024-10-28 18:09:59.309486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state
00:19:43.036 [2024-10-28 18:09:59.309514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.816 ms
00:19:43.036 [2024-10-28 18:09:59.309527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.036 [2024-10-28 18:09:59.309635] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:43.036 [2024-10-28 18:09:59.309673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.309999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310647] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310745] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free
00:19:43.036 [2024-10-28 18:09:59.310813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.310997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311199] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free
00:19:43.037 [2024-10-28 18:09:59.311423] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:43.037 [2024-10-28 18:09:59.311453] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cdd57b4-e896-4e97-a9c9-3c575802f024
00:19:43.037 [2024-10-28 18:09:59.311484] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:43.037 [2024-10-28 18:09:59.311510] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:43.037 [2024-10-28 18:09:59.311523] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:43.037 [2024-10-28 18:09:59.311541] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:43.037 [2024-10-28 18:09:59.311553] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:43.037 [2024-10-28 18:09:59.311570] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:43.037 [2024-10-28 18:09:59.311583] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:43.037 [2024-10-28 18:09:59.311600] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:43.037 [2024-10-28 18:09:59.311611] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:43.037 [2024-10-28 18:09:59.311629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.037 [2024-10-28 18:09:59.311643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:43.037 [2024-10-28 18:09:59.311671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.001 ms
00:19:43.037 [2024-10-28 18:09:59.311684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.037 [2024-10-28 18:09:59.328663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.037 [2024-10-28 18:09:59.328722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:43.037 [2024-10-28 18:09:59.328754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.896 ms
00:19:43.037 [2024-10-28 18:09:59.328768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.037 [2024-10-28 18:09:59.329332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:43.037 [2024-10-28 18:09:59.329368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:43.037 [2024-10-28 18:09:59.329391] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.409 ms
00:19:43.037 [2024-10-28 18:09:59.329412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.037 [2024-10-28 18:09:59.388633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.037 [2024-10-28 18:09:59.388710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:43.037 [2024-10-28 18:09:59.388737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.037 [2024-10-28 18:09:59.388751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.037 [2024-10-28 18:09:59.388927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.037 [2024-10-28 18:09:59.388948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:43.037 [2024-10-28 18:09:59.388968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.037 [2024-10-28 18:09:59.388988] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.037 [2024-10-28 18:09:59.389085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.037 [2024-10-28 18:09:59.389105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:43.037 [2024-10-28 18:09:59.389131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.037 [2024-10-28 18:09:59.389144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.037 [2024-10-28 18:09:59.389179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.037 [2024-10-28 18:09:59.389194] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:43.037 [2024-10-28 18:09:59.389211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.037 [2024-10-28 18:09:59.389224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.037 [2024-10-28 18:09:59.493224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.037 [2024-10-28 18:09:59.493297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:43.037 [2024-10-28 18:09:59.493324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.037 [2024-10-28 18:09:59.493338] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.295 [2024-10-28 18:09:59.579459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.295 [2024-10-28 18:09:59.579536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:43.295 [2024-10-28 18:09:59.579563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.295 [2024-10-28 18:09:59.579583] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.295 [2024-10-28 18:09:59.579707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.295 [2024-10-28 18:09:59.579727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:43.295 [2024-10-28 18:09:59.579751] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.295 [2024-10-28 18:09:59.579764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.295 [2024-10-28 18:09:59.579809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.295 [2024-10-28 18:09:59.579825] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:43.295 [2024-10-28 18:09:59.579869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.295 [2024-10-28 18:09:59.579886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.295 [2024-10-28 18:09:59.580035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.295 [2024-10-28 18:09:59.580055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:43.295 [2024-10-28 18:09:59.580074] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.295 [2024-10-28 18:09:59.580087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.295 [2024-10-28 18:09:59.580156] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.295 [2024-10-28 18:09:59.580175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:19:43.295 [2024-10-28 18:09:59.580194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.295 [2024-10-28 18:09:59.580207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.295 [2024-10-28 18:09:59.580266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.295 [2024-10-28 18:09:59.580289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:43.295 [2024-10-28 18:09:59.580311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.295 [2024-10-28 18:09:59.580324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.295 [2024-10-28 18:09:59.580389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:43.295 [2024-10-28 18:09:59.580407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:43.295 [2024-10-28 18:09:59.580426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:43.295 [2024-10-28 18:09:59.580439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:43.295 [2024-10-28 18:09:59.580623] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 362.173 ms, result 0
00:19:44.231 18:10:00 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
00:19:44.231 [2024-10-28 18:10:00.591978] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization...
00:19:44.231 [2024-10-28 18:10:00.592134] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75930 ]
00:19:44.489 [2024-10-28 18:10:00.770969] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:44.489 [2024-10-28 18:10:00.894648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:19:44.748 [2024-10-28 18:10:01.217659] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:44.748 [2024-10-28 18:10:01.217748] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1
00:19:45.007 [2024-10-28 18:10:01.379778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.007 [2024-10-28 18:10:01.379849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration
00:19:45.007 [2024-10-28 18:10:01.379870] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:19:45.007 [2024-10-28 18:10:01.379882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.383214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.383260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:19:45.008 [2024-10-28 18:10:01.383276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.301 ms
00:19:45.008 [2024-10-28 18:10:01.383288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.383475] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache
00:19:45.008 [2024-10-28 18:10:01.384455] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device
00:19:45.008 [2024-10-28 18:10:01.384494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.384508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:19:45.008 [2024-10-28 18:10:01.384522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.032 ms
00:19:45.008 [2024-10-28 18:10:01.384532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.385950] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0
00:19:45.008 [2024-10-28 18:10:01.402421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.402499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block
00:19:45.008 [2024-10-28 18:10:01.402519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.469 ms
00:19:45.008 [2024-10-28 18:10:01.402532] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.402726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.402754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block
00:19:45.008 [2024-10-28 18:10:01.402769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:19:45.008 [2024-10-28 18:10:01.402781] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.407386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.407680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:19:45.008 [2024-10-28 18:10:01.407714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.528 ms
00:19:45.008 [2024-10-28 18:10:01.407728] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.407933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.407959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:19:45.008 [2024-10-28 18:10:01.407973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms
00:19:45.008 [2024-10-28 18:10:01.407985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.408026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.408047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device
00:19:45.008 [2024-10-28 18:10:01.408059] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms
00:19:45.008 [2024-10-28 18:10:01.408070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.408102] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread
00:19:45.008 [2024-10-28 18:10:01.412378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.412421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:19:45.008 [2024-10-28 18:10:01.412437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.285 ms
00:19:45.008 [2024-10-28 18:10:01.412448] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.412530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.412548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands
00:19:45.008 [2024-10-28 18:10:01.412560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms
00:19:45.008 [2024-10-28 18:10:01.412572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.412605] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0
00:19:45.008 [2024-10-28 18:10:01.412639] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes
00:19:45.008 [2024-10-28 18:10:01.412683] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes
00:19:45.008 [2024-10-28 18:10:01.412703] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes
00:19:45.008 [2024-10-28 18:10:01.412817] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes
00:19:45.008 [2024-10-28 18:10:01.412853] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes
00:19:45.008 [2024-10-28 18:10:01.412873] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes
00:19:45.008 [2024-10-28 18:10:01.412887] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB
00:19:45.008 [2024-10-28 18:10:01.412906] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB
00:19:45.008 [2024-10-28 18:10:01.412919] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960
00:19:45.008 [2024-10-28 18:10:01.412930] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4
00:19:45.008 [2024-10-28 18:10:01.412940] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048
00:19:45.008 [2024-10-28 18:10:01.412950] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5
00:19:45.008 [2024-10-28 18:10:01.412962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.412973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout
00:19:45.008 [2024-10-28 18:10:01.412985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.361 ms
00:19:45.008 [2024-10-28 18:10:01.412995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.413128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.008 [2024-10-28 18:10:01.413146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout
00:19:45.008 [2024-10-28 18:10:01.413163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms
00:19:45.008 [2024-10-28 18:10:01.413174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.008 [2024-10-28 18:10:01.413289] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout:
00:19:45.008 [2024-10-28 18:10:01.413307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb
00:19:45.008 [2024-10-28 18:10:01.413319] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:19:45.008 [2024-10-28 18:10:01.413330] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413342] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p
00:19:45.008 [2024-10-28 18:10:01.413352] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB
00:19:45.008 [2024-10-28 18:10:01.413374] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md
00:19:45.008 [2024-10-28 18:10:01.413384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:19:45.008 [2024-10-28 18:10:01.413405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror
00:19:45.008 [2024-10-28 18:10:01.413417] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB
00:19:45.008 [2024-10-28 18:10:01.413427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB
00:19:45.008 [2024-10-28 18:10:01.413452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md
00:19:45.008 [2024-10-28 18:10:01.413463] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB
00:19:45.008 [2024-10-28 18:10:01.413473] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413483] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror
00:19:45.008 [2024-10-28 18:10:01.413493] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB
00:19:45.008 [2024-10-28 18:10:01.413503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0
00:19:45.008 [2024-10-28 18:10:01.413523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413533] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:45.008 [2024-10-28 18:10:01.413544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1
00:19:45.008 [2024-10-28 18:10:01.413554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413563] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:45.008 [2024-10-28 18:10:01.413574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2
00:19:45.008 [2024-10-28 18:10:01.413584] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:45.008 [2024-10-28 18:10:01.413604] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3
00:19:45.008 [2024-10-28 18:10:01.413614] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413623] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB
00:19:45.008 [2024-10-28 18:10:01.413634] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md
00:19:45.008 [2024-10-28 18:10:01.413644] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413671] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:19:45.008 [2024-10-28 18:10:01.413683] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror
00:19:45.008 [2024-10-28 18:10:01.413693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB
00:19:45.008 [2024-10-28 18:10:01.413703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB
00:19:45.008 [2024-10-28 18:10:01.413713] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log
00:19:45.008 [2024-10-28 18:10:01.413723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB
00:19:45.008 [2024-10-28 18:10:01.413733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror
00:19:45.008 [2024-10-28 18:10:01.413754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB
00:19:45.008 [2024-10-28 18:10:01.413765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413776] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout:
00:19:45.008 [2024-10-28 18:10:01.413787] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror
00:19:45.008 [2024-10-28 18:10:01.413798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB
00:19:45.008 [2024-10-28 18:10:01.413815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB
00:19:45.008 [2024-10-28 18:10:01.413826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap
00:19:45.008 [2024-10-28 18:10:01.413851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB
00:19:45.009 [2024-10-28 18:10:01.413864] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB
00:19:45.009 [2024-10-28 18:10:01.413875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm
00:19:45.009 [2024-10-28 18:10:01.413885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB
00:19:45.009 [2024-10-28 18:10:01.413896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB
00:19:45.009 [2024-10-28 18:10:01.413908] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc:
00:19:45.009 [2024-10-28 18:10:01.413922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:45.009 [2024-10-28 18:10:01.413935] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00
00:19:45.009 [2024-10-28 18:10:01.413946] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80
00:19:45.009 [2024-10-28 18:10:01.413957] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80
00:19:45.009 [2024-10-28 18:10:01.413968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800
00:19:45.009 [2024-10-28 18:10:01.413979] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800
00:19:45.009 [2024-10-28 18:10:01.413990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800
00:19:45.009 [2024-10-28 18:10:01.414001] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800
00:19:45.009 [2024-10-28 18:10:01.414012] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40
00:19:45.009 [2024-10-28 18:10:01.414023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40
00:19:45.009 [2024-10-28 18:10:01.414034] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20
00:19:45.009 [2024-10-28 18:10:01.414045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20
00:19:45.009 [2024-10-28 18:10:01.414056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20
00:19:45.009 [2024-10-28 18:10:01.414067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20
00:19:45.009 [2024-10-28 18:10:01.414078] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0
00:19:45.009 [2024-10-28 18:10:01.414089] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev:
00:19:45.009 [2024-10-28 18:10:01.414113] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20
00:19:45.009 [2024-10-28 18:10:01.414126] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20
00:19:45.009 [2024-10-28 18:10:01.414137] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000
00:19:45.009 [2024-10-28 18:10:01.414148] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360
00:19:45.009 [2024-10-28 18:10:01.414159] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60
00:19:45.009 [2024-10-28 18:10:01.414172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.009 [2024-10-28 18:10:01.414184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade
00:19:45.009 [2024-10-28 18:10:01.414201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms
00:19:45.009 [2024-10-28 18:10:01.414212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.009 [2024-10-28 18:10:01.447386] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.009 [2024-10-28 18:10:01.447664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata
00:19:45.009 [2024-10-28 18:10:01.447809] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.100 ms
00:19:45.009 [2024-10-28 18:10:01.447882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.009 [2024-10-28 18:10:01.448208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.009 [2024-10-28 18:10:01.448355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses
00:19:45.009 [2024-10-28 18:10:01.448474] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms
00:19:45.009 [2024-10-28 18:10:01.448599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.494233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.494498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache
00:19:45.268 [2024-10-28 18:10:01.494629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.540 ms
00:19:45.268 [2024-10-28 18:10:01.494759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.494996] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.495057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map
00:19:45.268 [2024-10-28 18:10:01.495174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms
00:19:45.268 [2024-10-28 18:10:01.495228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.495605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.495746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map
00:19:45.268 [2024-10-28 18:10:01.495874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.309 ms
00:19:45.268 [2024-10-28 18:10:01.495992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.496198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.496264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata
00:19:45.268 [2024-10-28 18:10:01.496431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.130 ms
00:19:45.268 [2024-10-28 18:10:01.496486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.513564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.513853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:45.268 [2024-10-28 18:10:01.513987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.941 ms
00:19:45.268 [2024-10-28 18:10:01.514039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.530699] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2
00:19:45.268 [2024-10-28 18:10:01.531007] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully
00:19:45.268 [2024-10-28 18:10:01.531144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.531164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata
00:19:45.268 [2024-10-28 18:10:01.531179] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.872 ms
00:19:45.268 [2024-10-28 18:10:01.531190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.561936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.562053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata
00:19:45.268 [2024-10-28 18:10:01.562075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.556 ms
00:19:45.268 [2024-10-28 18:10:01.562087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.578880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.578949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata
00:19:45.268 [2024-10-28 18:10:01.578971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.614 ms
00:19:45.268 [2024-10-28 18:10:01.578983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.596168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.596238] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata
00:19:45.268 [2024-10-28 18:10:01.596258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.019 ms
00:19:45.268 [2024-10-28 18:10:01.596270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.597247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.597389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing
00:19:45.268 [2024-10-28 18:10:01.597416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms
00:19:45.268 [2024-10-28 18:10:01.597429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.671901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.672185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints
00:19:45.268 [2024-10-28 18:10:01.672217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.428 ms
00:19:45.268 [2024-10-28 18:10:01.672230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.685258] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB
00:19:45.268 [2024-10-28 18:10:01.699380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.699615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P
00:19:45.268 [2024-10-28 18:10:01.699649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.986 ms
00:19:45.268 [2024-10-28 18:10:01.699662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.699866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.699892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P
00:19:45.268 [2024-10-28 18:10:01.699906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms
00:19:45.268 [2024-10-28 18:10:01.699917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.699987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.700004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization
00:19:45.268 [2024-10-28 18:10:01.700017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms
00:19:45.268 [2024-10-28 18:10:01.700027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.700068] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.700088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller
00:19:45.268 [2024-10-28 18:10:01.700100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:19:45.268 [2024-10-28 18:10:01.700111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.700150] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped
00:19:45.268 [2024-10-28 18:10:01.700166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.700177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup
00:19:45.268 [2024-10-28 18:10:01.700188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms
00:19:45.268 [2024-10-28 18:10:01.700199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.731871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.731946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state
00:19:45.268 [2024-10-28 18:10:01.731967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.636 ms
00:19:45.268 [2024-10-28 18:10:01.731979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.268 [2024-10-28 18:10:01.732191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:45.268 [2024-10-28 18:10:01.732213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:19:45.268 [2024-10-28 18:10:01.732227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.052 ms
00:19:45.269 [2024-10-28 18:10:01.732238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:45.269 [2024-10-28 18:10:01.733209] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:45.269 [2024-10-28 18:10:01.737621] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 353.068 ms, result 0
00:19:45.269 [2024-10-28 18:10:01.738540] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:45.527 [2024-10-28 18:10:01.755230] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread
00:19:46.461  [2024-10-28T18:10:03.872Z] Copying: 29/256 [MB] (29 MBps) [2024-10-28T18:10:05.246Z] Copying: 52/256 [MB] (23 MBps) [2024-10-28T18:10:06.180Z] Copying: 75/256 [MB] (22 MBps) [2024-10-28T18:10:07.114Z] Copying: 97/256 [MB] (22 MBps) [2024-10-28T18:10:08.049Z] Copying: 119/256 [MB] (22 MBps) [2024-10-28T18:10:08.983Z] Copying: 142/256 [MB] (22 MBps) [2024-10-28T18:10:09.916Z] Copying: 165/256 [MB] (22 MBps) [2024-10-28T18:10:10.849Z] Copying: 188/256 [MB] (23 MBps) [2024-10-28T18:10:12.224Z] Copying: 211/256 [MB] (23 MBps) [2024-10-28T18:10:12.790Z] Copying: 235/256 [MB] (23 MBps) [2024-10-28T18:10:13.356Z] Copying: 256/256 [MB] (average 23 MBps)[2024-10-28 18:10:13.047008] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:19:56.878 [2024-10-28 18:10:13.065706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:56.878 [2024-10-28 18:10:13.065801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:19:56.878 [2024-10-28 18:10:13.065821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:19:56.878 [2024-10-28 18:10:13.065853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:56.878 [2024-10-28 18:10:13.065888] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread
00:19:56.878 [2024-10-28 18:10:13.069376] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:56.878 [2024-10-28 18:10:13.069408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:19:56.878 [2024-10-28 18:10:13.069439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.466 ms
00:19:56.878 [2024-10-28 18:10:13.069450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:56.878 [2024-10-28 18:10:13.069820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:56.878 [2024-10-28 18:10:13.069839] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:19:56.878 [2024-10-28 18:10:13.069862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.329 ms
00:19:56.878 [2024-10-28 18:10:13.069876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:56.878 [2024-10-28 18:10:13.073654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:56.878 [2024-10-28 18:10:13.073710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:19:56.878 [2024-10-28 18:10:13.073741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.756 ms
00:19:56.878 [2024-10-28 18:10:13.073752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*:
[FTL][ftl0] status: 0 00:19:56.878 [2024-10-28 18:10:13.080585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.878 [2024-10-28 18:10:13.080777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:56.878 [2024-10-28 18:10:13.080802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.809 ms 00:19:56.878 [2024-10-28 18:10:13.080813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.878 [2024-10-28 18:10:13.110443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.878 [2024-10-28 18:10:13.110634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:56.878 [2024-10-28 18:10:13.110677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.512 ms 00:19:56.878 [2024-10-28 18:10:13.110690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.878 [2024-10-28 18:10:13.127217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.878 [2024-10-28 18:10:13.127275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:56.878 [2024-10-28 18:10:13.127309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.459 ms 00:19:56.878 [2024-10-28 18:10:13.127325] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.878 [2024-10-28 18:10:13.127499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.878 [2024-10-28 18:10:13.127518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:56.878 [2024-10-28 18:10:13.127529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.110 ms 00:19:56.878 [2024-10-28 18:10:13.127539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.878 [2024-10-28 18:10:13.156463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.878 [2024-10-28 18:10:13.156500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:56.878 [2024-10-28 18:10:13.156531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.890 ms 00:19:56.878 [2024-10-28 18:10:13.156540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.878 [2024-10-28 18:10:13.186812] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.878 [2024-10-28 18:10:13.187038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:56.878 [2024-10-28 18:10:13.187066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.193 ms 00:19:56.878 [2024-10-28 18:10:13.187078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.878 [2024-10-28 18:10:13.219159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.878 [2024-10-28 18:10:13.219198] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:56.878 [2024-10-28 18:10:13.219214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.010 ms 00:19:56.878 [2024-10-28 18:10:13.219224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.878 [2024-10-28 18:10:13.249811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.878 [2024-10-28 18:10:13.249865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:56.878 [2024-10-28 18:10:13.249883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.496 ms 00:19:56.878 
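The shutdown trace above reads most easily as (action, duration) pairs: every step logs a "name:" record followed by a "duration:" record. A minimal sketch of a summarizer for such a log, assuming one record per line as Jenkins emits them (the script and log filenames are hypothetical, not part of the test suite):

# trace_summary.sh - list each FTL trace_step action with its duration
awk '
  /trace_step/ && /name: /     { name = $0; sub(/.*name: /, "", name) }
  /trace_step/ && /duration: / { dur = $0; sub(/.*duration: /, "", dur); sub(/ ms.*/, "", dur)
                                 printf "%-40s %10s ms\n", name, dur }
' ftl_restore.log

Run against the shutdown sequence above, this would show "Persist superblock" (32.010 ms) and "Set FTL clean state" (30.496 ms) among the dominant steps of the 'FTL shutdown' total reported further below.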
[2024-10-28 18:10:13.249895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:56.878 [2024-10-28 18:10:13.249961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:19:56.878 [2024-10-28 18:10:13.249986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1-100: 0 / 261120 wr_cnt: 0 state: free
00:19:56.879 [2024-10-28 18:10:13.251264] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0]
00:19:56.879 [2024-10-28 18:10:13.251274] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0cdd57b4-e896-4e97-a9c9-3c575802f024
00:19:56.879 [2024-10-28 18:10:13.251285] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0
00:19:56.879 [2024-10-28 18:10:13.251295] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960
00:19:56.879 [2024-10-28 18:10:13.251305] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0
00:19:56.879 [2024-10-28 18:10:13.251315] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf
00:19:56.879 [2024-10-28 18:10:13.251326] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits:
00:19:56.879 [2024-10-28 18:10:13.251337] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0
00:19:56.879 [2024-10-28 18:10:13.251347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0
00:19:56.879 [2024-10-28 18:10:13.251357] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0
00:19:56.879 [2024-10-28 18:10:13.251366] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0
00:19:56.879 [2024-10-28 18:10:13.251377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:56.879 [2024-10-28 18:10:13.251394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics
00:19:56.879 [2024-10-28 18:10:13.251406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.418 ms
00:19:56.879 [2024-10-28 18:10:13.251416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:56.879 [2024-10-28 18:10:13.268019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:56.879 [2024-10-28 18:10:13.268058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P
00:19:56.879 [2024-10-28 18:10:13.268073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.578 ms
00:19:56.879 [2024-10-28 18:10:13.268083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:56.879 [2024-10-28 18:10:13.268520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:19:56.879 [2024-10-28 18:10:13.268550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing
00:19:56.879 [2024-10-28 18:10:13.268564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms
00:19:56.879 [2024-10-28 18:10:13.268574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:56.879 [2024-10-28 18:10:13.317194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:56.879 [2024-10-28 18:10:13.317241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc
00:19:56.879 [2024-10-28 18:10:13.317257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:19:56.879 [2024-10-28 18:10:13.317268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:19:56.879 [2024-10-28 18:10:13.317383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:19:56.879 [2024-10-28 18:10:13.317400] mngt/ftl_mngt.c:
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:56.879 [2024-10-28 18:10:13.317411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.879 [2024-10-28 18:10:13.317421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.879 [2024-10-28 18:10:13.317482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.879 [2024-10-28 18:10:13.317500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:56.879 [2024-10-28 18:10:13.317512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.879 [2024-10-28 18:10:13.317538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.879 [2024-10-28 18:10:13.317561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:56.879 [2024-10-28 18:10:13.317580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:56.879 [2024-10-28 18:10:13.317591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:56.879 [2024-10-28 18:10:13.317601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.137 [2024-10-28 18:10:13.418088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.137 [2024-10-28 18:10:13.418413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.137 [2024-10-28 18:10:13.418442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.137 [2024-10-28 18:10:13.418456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.137 [2024-10-28 18:10:13.503404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.137 [2024-10-28 18:10:13.503486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.137 [2024-10-28 18:10:13.503519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.137 [2024-10-28 18:10:13.503529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.137 [2024-10-28 18:10:13.503605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.137 [2024-10-28 18:10:13.503622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.137 [2024-10-28 18:10:13.503632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.137 [2024-10-28 18:10:13.503642] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.137 [2024-10-28 18:10:13.503688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.137 [2024-10-28 18:10:13.503700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.137 [2024-10-28 18:10:13.503716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.137 [2024-10-28 18:10:13.503726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.137 [2024-10-28 18:10:13.503836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.137 [2024-10-28 18:10:13.503853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.137 [2024-10-28 18:10:13.503903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.137 [2024-10-28 18:10:13.503934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.137 [2024-10-28 18:10:13.503986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:19:57.137 [2024-10-28 18:10:13.504020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:57.137 [2024-10-28 18:10:13.504046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.137 [2024-10-28 18:10:13.504063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.137 [2024-10-28 18:10:13.504109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.137 [2024-10-28 18:10:13.504123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.137 [2024-10-28 18:10:13.504134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.137 [2024-10-28 18:10:13.504143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.137 [2024-10-28 18:10:13.504193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.137 [2024-10-28 18:10:13.504209] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.137 [2024-10-28 18:10:13.504225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.137 [2024-10-28 18:10:13.504235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.137 [2024-10-28 18:10:13.504436] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 438.752 ms, result 0 00:19:58.072 00:19:58.072 00:19:58.072 18:10:14 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:58.638 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:19:58.638 18:10:14 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:19:58.638 18:10:14 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:19:58.638 18:10:14 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:19:58.638 18:10:14 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:58.638 18:10:14 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:19:58.638 18:10:15 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:58.638 18:10:15 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 75866 00:19:58.638 Process with pid 75866 is not found 00:19:58.638 18:10:15 ftl.ftl_trim -- common/autotest_common.sh@952 -- # '[' -z 75866 ']' 00:19:58.638 18:10:15 ftl.ftl_trim -- common/autotest_common.sh@956 -- # kill -0 75866 00:19:58.638 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (75866) - No such process 00:19:58.638 18:10:15 ftl.ftl_trim -- common/autotest_common.sh@979 -- # echo 'Process with pid 75866 is not found' 00:19:58.638 00:19:58.638 real 1m11.161s 00:19:58.638 user 1m38.742s 00:19:58.638 sys 0m7.213s 00:19:58.638 18:10:15 ftl.ftl_trim -- common/autotest_common.sh@1128 -- # xtrace_disable 00:19:58.638 ************************************ 00:19:58.638 END TEST ftl_trim 00:19:58.638 ************************************ 00:19:58.638 18:10:15 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:58.638 18:10:15 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:58.638 18:10:15 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:19:58.638 18:10:15 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:19:58.638 18:10:15 ftl -- common/autotest_common.sh@10 
-- # set +x 00:19:58.638 ************************************ 00:19:58.638 START TEST ftl_restore 00:19:58.638 ************************************ 00:19:58.638 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:19:58.897 * Looking for test storage... 00:19:58.897 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:58.897 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:19:58.897 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lcov --version 00:19:58.897 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:19:58.897 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:58.897 18:10:15 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:19:58.897 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:58.897 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:19:58.897 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.897 --rc genhtml_branch_coverage=1 00:19:58.897 --rc genhtml_function_coverage=1 00:19:58.897 --rc genhtml_legend=1 00:19:58.897 --rc geninfo_all_blocks=1 00:19:58.897 --rc geninfo_unexecuted_blocks=1 00:19:58.897 00:19:58.898 ' 00:19:58.898 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:19:58.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.898 --rc genhtml_branch_coverage=1 00:19:58.898 --rc genhtml_function_coverage=1 00:19:58.898 --rc genhtml_legend=1 00:19:58.898 --rc geninfo_all_blocks=1 00:19:58.898 --rc geninfo_unexecuted_blocks=1 00:19:58.898 00:19:58.898 ' 00:19:58.898 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:19:58.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.898 --rc genhtml_branch_coverage=1 00:19:58.898 --rc genhtml_function_coverage=1 00:19:58.898 --rc genhtml_legend=1 00:19:58.898 --rc geninfo_all_blocks=1 00:19:58.898 --rc geninfo_unexecuted_blocks=1 00:19:58.898 00:19:58.898 ' 00:19:58.898 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:19:58.898 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:58.898 --rc genhtml_branch_coverage=1 00:19:58.898 --rc genhtml_function_coverage=1 00:19:58.898 --rc genhtml_legend=1 00:19:58.898 --rc geninfo_all_blocks=1 00:19:58.898 --rc geninfo_unexecuted_blocks=1 00:19:58.898 00:19:58.898 ' 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
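The lcov version gate traced above (lt 1.15 2, which calls cmp_versions 1.15 '<' 2) splits both version strings on '.', '-', and ':' and compares them field by field. A standalone distillation of that comparison, assuming only the '<' case (the function name is hypothetical; the real cmp_versions in scripts/common.sh also implements '>' and '=='):

# version_lt A B - exit 0 iff version A sorts strictly before version B
version_lt() {
  local IFS=.-:
  local -a v1 v2
  read -ra v1 <<< "$1"
  read -ra v2 <<< "$2"
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for ((i = 0; i < n; i++)); do
    (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
    (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
  done
  return 1 # equal versions are not "less than"
}
version_lt 1.15 2 && echo "lcov predates 2.x"  # matches the trace: 1 < 2 on the first field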
00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.PiTo8BJWKh 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:19:58.898 
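As the xtrace shows, restore.sh parses its options with getopts, takes the positional BDFs, and arms a cleanup trap before anything can fail; the trap is disarmed only once the test has passed (compare 'trap - SIGINT SIGTERM EXIT' in the trim test above). A simplified sketch of that skeleton under the same conventions (the restore_kill body here is an abbreviated stand-in, not the suite's full helper):

mount_dir=$(mktemp -d)
while getopts ':u:c:f' opt; do
  case $opt in
    c) nv_cache=$OPTARG ;;  # -c <BDF>: NV cache device (0000:00:10.0 in this run)
    *) ;;                   # -u and -f handling elided
  esac
done
shift 2                     # drop the parsed "-c <BDF>" pair, as restore.sh@23 does
device=$1                   # base device (0000:00:11.0 in this run)
timeout=240

restore_kill() {            # abbreviated: remove scratch files, stop the target
  rm -rf "$mount_dir"
  [[ -n ${svcpid:-} ]] && kill "$svcpid" 2> /dev/null
}
trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT
# ... test body; on success: trap - SIGINT SIGTERM EXIT, then clean up explicitly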
18:10:15 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=76139 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 76139 00:19:58.898 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@833 -- # '[' -z 76139 ']' 00:19:58.898 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.898 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@838 -- # local max_retries=100 00:19:58.898 18:10:15 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:58.898 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.898 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@842 -- # xtrace_disable 00:19:58.898 18:10:15 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:19:59.156 [2024-10-28 18:10:15.480557] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:19:59.156 [2024-10-28 18:10:15.480750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76139 ] 00:19:59.414 [2024-10-28 18:10:15.668941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.414 [2024-10-28 18:10:15.796411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.349 18:10:16 ftl.ftl_restore -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:00.349 18:10:16 ftl.ftl_restore -- common/autotest_common.sh@866 -- # return 0 00:20:00.349 18:10:16 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:00.349 18:10:16 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:00.349 18:10:16 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:00.349 18:10:16 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:00.349 18:10:16 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:00.349 18:10:16 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:00.607 18:10:16 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:00.607 18:10:16 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:00.607 18:10:16 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:00.607 18:10:16 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:20:00.607 18:10:16 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:00.607 18:10:16 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:00.607 18:10:16 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:00.607 18:10:16 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:00.866 18:10:17 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:00.866 { 00:20:00.866 "name": "nvme0n1", 00:20:00.866 "aliases": [ 00:20:00.866 "c8f957b1-394d-421c-a872-02aa4c54ce4a" 00:20:00.866 ], 00:20:00.866 "product_name": "NVMe disk", 00:20:00.866 "block_size": 4096, 00:20:00.866 "num_blocks": 1310720, 00:20:00.866 "uuid": 
"c8f957b1-394d-421c-a872-02aa4c54ce4a", 00:20:00.866 "numa_id": -1, 00:20:00.866 "assigned_rate_limits": { 00:20:00.866 "rw_ios_per_sec": 0, 00:20:00.866 "rw_mbytes_per_sec": 0, 00:20:00.866 "r_mbytes_per_sec": 0, 00:20:00.866 "w_mbytes_per_sec": 0 00:20:00.866 }, 00:20:00.866 "claimed": true, 00:20:00.866 "claim_type": "read_many_write_one", 00:20:00.866 "zoned": false, 00:20:00.866 "supported_io_types": { 00:20:00.866 "read": true, 00:20:00.866 "write": true, 00:20:00.866 "unmap": true, 00:20:00.866 "flush": true, 00:20:00.866 "reset": true, 00:20:00.866 "nvme_admin": true, 00:20:00.866 "nvme_io": true, 00:20:00.866 "nvme_io_md": false, 00:20:00.866 "write_zeroes": true, 00:20:00.866 "zcopy": false, 00:20:00.866 "get_zone_info": false, 00:20:00.866 "zone_management": false, 00:20:00.866 "zone_append": false, 00:20:00.866 "compare": true, 00:20:00.866 "compare_and_write": false, 00:20:00.866 "abort": true, 00:20:00.866 "seek_hole": false, 00:20:00.866 "seek_data": false, 00:20:00.866 "copy": true, 00:20:00.866 "nvme_iov_md": false 00:20:00.866 }, 00:20:00.866 "driver_specific": { 00:20:00.866 "nvme": [ 00:20:00.866 { 00:20:00.866 "pci_address": "0000:00:11.0", 00:20:00.866 "trid": { 00:20:00.866 "trtype": "PCIe", 00:20:00.866 "traddr": "0000:00:11.0" 00:20:00.866 }, 00:20:00.866 "ctrlr_data": { 00:20:00.866 "cntlid": 0, 00:20:00.866 "vendor_id": "0x1b36", 00:20:00.866 "model_number": "QEMU NVMe Ctrl", 00:20:00.866 "serial_number": "12341", 00:20:00.866 "firmware_revision": "8.0.0", 00:20:00.866 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:00.866 "oacs": { 00:20:00.866 "security": 0, 00:20:00.866 "format": 1, 00:20:00.866 "firmware": 0, 00:20:00.866 "ns_manage": 1 00:20:00.866 }, 00:20:00.866 "multi_ctrlr": false, 00:20:00.866 "ana_reporting": false 00:20:00.866 }, 00:20:00.866 "vs": { 00:20:00.866 "nvme_version": "1.4" 00:20:00.866 }, 00:20:00.866 "ns_data": { 00:20:00.866 "id": 1, 00:20:00.866 "can_share": false 00:20:00.866 } 00:20:00.866 } 00:20:00.866 ], 00:20:00.866 "mp_policy": "active_passive" 00:20:00.866 } 00:20:00.866 } 00:20:00.866 ]' 00:20:00.866 18:10:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:00.866 18:10:17 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:00.866 18:10:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:00.866 18:10:17 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=1310720 00:20:00.866 18:10:17 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:20:00.866 18:10:17 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 5120 00:20:00.866 18:10:17 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:00.866 18:10:17 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:00.866 18:10:17 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:01.124 18:10:17 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:01.124 18:10:17 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:01.124 18:10:17 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=b8826329-ef81-47b5-b6ce-887246647610 00:20:01.124 18:10:17 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:01.124 18:10:17 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b8826329-ef81-47b5-b6ce-887246647610 00:20:01.690 18:10:17 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore nvme0n1 lvs 00:20:01.949 18:10:18 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=45c7aefe-3da6-4784-a5b0-d33c7d21a4fd 00:20:01.949 18:10:18 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 45c7aefe-3da6-4784-a5b0-d33c7d21a4fd 00:20:02.207 18:10:18 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:02.207 18:10:18 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:02.207 18:10:18 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:02.207 18:10:18 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:02.207 18:10:18 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:02.207 18:10:18 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:02.207 18:10:18 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:02.207 18:10:18 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:02.207 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:02.207 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:02.207 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:02.207 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:02.207 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:02.466 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:02.466 { 00:20:02.466 "name": "cc37eebb-1434-407f-88b5-cca61caf1b30", 00:20:02.466 "aliases": [ 00:20:02.466 "lvs/nvme0n1p0" 00:20:02.466 ], 00:20:02.466 "product_name": "Logical Volume", 00:20:02.466 "block_size": 4096, 00:20:02.466 "num_blocks": 26476544, 00:20:02.466 "uuid": "cc37eebb-1434-407f-88b5-cca61caf1b30", 00:20:02.466 "assigned_rate_limits": { 00:20:02.466 "rw_ios_per_sec": 0, 00:20:02.466 "rw_mbytes_per_sec": 0, 00:20:02.466 "r_mbytes_per_sec": 0, 00:20:02.466 "w_mbytes_per_sec": 0 00:20:02.466 }, 00:20:02.466 "claimed": false, 00:20:02.466 "zoned": false, 00:20:02.466 "supported_io_types": { 00:20:02.466 "read": true, 00:20:02.466 "write": true, 00:20:02.466 "unmap": true, 00:20:02.466 "flush": false, 00:20:02.466 "reset": true, 00:20:02.466 "nvme_admin": false, 00:20:02.466 "nvme_io": false, 00:20:02.466 "nvme_io_md": false, 00:20:02.466 "write_zeroes": true, 00:20:02.466 "zcopy": false, 00:20:02.466 "get_zone_info": false, 00:20:02.466 "zone_management": false, 00:20:02.466 "zone_append": false, 00:20:02.466 "compare": false, 00:20:02.466 "compare_and_write": false, 00:20:02.466 "abort": false, 00:20:02.466 "seek_hole": true, 00:20:02.466 "seek_data": true, 00:20:02.466 "copy": false, 00:20:02.466 "nvme_iov_md": false 00:20:02.466 }, 00:20:02.466 "driver_specific": { 00:20:02.466 "lvol": { 00:20:02.466 "lvol_store_uuid": "45c7aefe-3da6-4784-a5b0-d33c7d21a4fd", 00:20:02.466 "base_bdev": "nvme0n1", 00:20:02.466 "thin_provision": true, 00:20:02.466 "num_allocated_clusters": 0, 00:20:02.466 "snapshot": false, 00:20:02.466 "clone": false, 00:20:02.466 "esnap_clone": false 00:20:02.466 } 00:20:02.466 } 00:20:02.466 } 00:20:02.466 ]' 00:20:02.466 18:10:18 ftl.ftl_restore -- 
common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:02.466 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:02.466 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:02.466 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:02.466 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:02.466 18:10:18 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:02.466 18:10:18 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:02.466 18:10:18 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:02.466 18:10:18 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:03.031 18:10:19 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:03.031 18:10:19 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:03.031 18:10:19 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:03.031 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:03.031 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:03.031 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:03.031 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:03.031 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:03.289 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:03.289 { 00:20:03.289 "name": "cc37eebb-1434-407f-88b5-cca61caf1b30", 00:20:03.289 "aliases": [ 00:20:03.289 "lvs/nvme0n1p0" 00:20:03.289 ], 00:20:03.289 "product_name": "Logical Volume", 00:20:03.289 "block_size": 4096, 00:20:03.290 "num_blocks": 26476544, 00:20:03.290 "uuid": "cc37eebb-1434-407f-88b5-cca61caf1b30", 00:20:03.290 "assigned_rate_limits": { 00:20:03.290 "rw_ios_per_sec": 0, 00:20:03.290 "rw_mbytes_per_sec": 0, 00:20:03.290 "r_mbytes_per_sec": 0, 00:20:03.290 "w_mbytes_per_sec": 0 00:20:03.290 }, 00:20:03.290 "claimed": false, 00:20:03.290 "zoned": false, 00:20:03.290 "supported_io_types": { 00:20:03.290 "read": true, 00:20:03.290 "write": true, 00:20:03.290 "unmap": true, 00:20:03.290 "flush": false, 00:20:03.290 "reset": true, 00:20:03.290 "nvme_admin": false, 00:20:03.290 "nvme_io": false, 00:20:03.290 "nvme_io_md": false, 00:20:03.290 "write_zeroes": true, 00:20:03.290 "zcopy": false, 00:20:03.290 "get_zone_info": false, 00:20:03.290 "zone_management": false, 00:20:03.290 "zone_append": false, 00:20:03.290 "compare": false, 00:20:03.290 "compare_and_write": false, 00:20:03.290 "abort": false, 00:20:03.290 "seek_hole": true, 00:20:03.290 "seek_data": true, 00:20:03.290 "copy": false, 00:20:03.290 "nvme_iov_md": false 00:20:03.290 }, 00:20:03.290 "driver_specific": { 00:20:03.290 "lvol": { 00:20:03.290 "lvol_store_uuid": "45c7aefe-3da6-4784-a5b0-d33c7d21a4fd", 00:20:03.290 "base_bdev": "nvme0n1", 00:20:03.290 "thin_provision": true, 00:20:03.290 "num_allocated_clusters": 0, 00:20:03.290 "snapshot": false, 00:20:03.290 "clone": false, 00:20:03.290 "esnap_clone": false 00:20:03.290 } 00:20:03.290 } 00:20:03.290 } 00:20:03.290 ]' 00:20:03.290 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 
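get_bdev_size, whose xtrace appears repeatedly above, reads the two jq fields just shown and converts blocks to MiB: block_size * num_blocks / 1024 / 1024, so 4096 * 1310720 gives the 5120 MiB reported for nvme0n1 and 4096 * 26476544 the 103424 MiB for the lvol. A condensed sketch of the helper (it fetches the JSON twice for brevity, where the real one caches bdev_info and runs jq on it):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
get_bdev_size() {  # usage: get_bdev_size <bdev> -> size in MiB on stdout
  local bs nb
  bs=$("$rpc" bdev_get_bdevs -b "$1" | jq '.[] .block_size')
  nb=$("$rpc" bdev_get_bdevs -b "$1" | jq '.[] .num_blocks')
  echo $(( bs * nb / 1024 / 1024 ))
}
get_bdev_size nvme0n1  # prints 5120 for the QEMU namespace above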
00:20:03.290 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:03.290 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:03.290 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # nb=26476544 00:20:03.290 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:03.290 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:03.290 18:10:19 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:03.290 18:10:19 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:03.547 18:10:19 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:03.547 18:10:19 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:03.547 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1380 -- # local bdev_name=cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:03.547 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1381 -- # local bdev_info 00:20:03.547 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bs 00:20:03.547 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local nb 00:20:03.547 18:10:19 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b cc37eebb-1434-407f-88b5-cca61caf1b30 00:20:03.804 18:10:20 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:20:03.804 { 00:20:03.804 "name": "cc37eebb-1434-407f-88b5-cca61caf1b30", 00:20:03.804 "aliases": [ 00:20:03.804 "lvs/nvme0n1p0" 00:20:03.804 ], 00:20:03.804 "product_name": "Logical Volume", 00:20:03.804 "block_size": 4096, 00:20:03.804 "num_blocks": 26476544, 00:20:03.804 "uuid": "cc37eebb-1434-407f-88b5-cca61caf1b30", 00:20:03.804 "assigned_rate_limits": { 00:20:03.804 "rw_ios_per_sec": 0, 00:20:03.804 "rw_mbytes_per_sec": 0, 00:20:03.804 "r_mbytes_per_sec": 0, 00:20:03.804 "w_mbytes_per_sec": 0 00:20:03.804 }, 00:20:03.804 "claimed": false, 00:20:03.804 "zoned": false, 00:20:03.804 "supported_io_types": { 00:20:03.804 "read": true, 00:20:03.804 "write": true, 00:20:03.804 "unmap": true, 00:20:03.804 "flush": false, 00:20:03.804 "reset": true, 00:20:03.804 "nvme_admin": false, 00:20:03.804 "nvme_io": false, 00:20:03.804 "nvme_io_md": false, 00:20:03.804 "write_zeroes": true, 00:20:03.804 "zcopy": false, 00:20:03.804 "get_zone_info": false, 00:20:03.804 "zone_management": false, 00:20:03.804 "zone_append": false, 00:20:03.804 "compare": false, 00:20:03.804 "compare_and_write": false, 00:20:03.804 "abort": false, 00:20:03.804 "seek_hole": true, 00:20:03.804 "seek_data": true, 00:20:03.804 "copy": false, 00:20:03.804 "nvme_iov_md": false 00:20:03.804 }, 00:20:03.804 "driver_specific": { 00:20:03.804 "lvol": { 00:20:03.804 "lvol_store_uuid": "45c7aefe-3da6-4784-a5b0-d33c7d21a4fd", 00:20:03.804 "base_bdev": "nvme0n1", 00:20:03.804 "thin_provision": true, 00:20:03.804 "num_allocated_clusters": 0, 00:20:03.804 "snapshot": false, 00:20:03.804 "clone": false, 00:20:03.804 "esnap_clone": false 00:20:03.804 } 00:20:03.804 } 00:20:03.804 } 00:20:03.804 ]' 00:20:03.804 18:10:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:20:03.804 18:10:20 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # bs=4096 00:20:03.804 18:10:20 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:20:03.804 18:10:20 ftl.ftl_restore -- 
common/autotest_common.sh@1386 -- # nb=26476544 00:20:03.804 18:10:20 ftl.ftl_restore -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:20:03.805 18:10:20 ftl.ftl_restore -- common/autotest_common.sh@1390 -- # echo 103424 00:20:03.805 18:10:20 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:03.805 18:10:20 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d cc37eebb-1434-407f-88b5-cca61caf1b30 --l2p_dram_limit 10' 00:20:03.805 18:10:20 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:03.805 18:10:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:03.805 18:10:20 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:03.805 18:10:20 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:03.805 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:03.805 18:10:20 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d cc37eebb-1434-407f-88b5-cca61caf1b30 --l2p_dram_limit 10 -c nvc0n1p0 00:20:04.064 [2024-10-28 18:10:20.510788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.510886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:04.064 [2024-10-28 18:10:20.510912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:04.064 [2024-10-28 18:10:20.510926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.511002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.511019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:04.064 [2024-10-28 18:10:20.511033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:20:04.064 [2024-10-28 18:10:20.511043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.511121] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:04.064 [2024-10-28 18:10:20.512113] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:04.064 [2024-10-28 18:10:20.512169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.512183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:04.064 [2024-10-28 18:10:20.512197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.075 ms 00:20:04.064 [2024-10-28 18:10:20.512209] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.512360] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID e26672bb-562c-40ba-bbba-bd2e0247fc2e 00:20:04.064 [2024-10-28 18:10:20.513307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.513365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:04.064 [2024-10-28 18:10:20.513381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:04.064 [2024-10-28 18:10:20.513404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.517557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 
18:10:20.517616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:04.064 [2024-10-28 18:10:20.517647] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.100 ms 00:20:04.064 [2024-10-28 18:10:20.517660] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.517799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.517824] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:04.064 [2024-10-28 18:10:20.517838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:20:04.064 [2024-10-28 18:10:20.517870] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.517958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.517991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:04.064 [2024-10-28 18:10:20.518006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:04.064 [2024-10-28 18:10:20.518024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.518058] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:04.064 [2024-10-28 18:10:20.522371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.522423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:04.064 [2024-10-28 18:10:20.522465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.319 ms 00:20:04.064 [2024-10-28 18:10:20.522476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.522533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.522548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:04.064 [2024-10-28 18:10:20.522561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:04.064 [2024-10-28 18:10:20.522572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.522647] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:04.064 [2024-10-28 18:10:20.522799] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:04.064 [2024-10-28 18:10:20.522845] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:04.064 [2024-10-28 18:10:20.522865] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:04.064 [2024-10-28 18:10:20.522883] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:04.064 [2024-10-28 18:10:20.522896] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:04.064 [2024-10-28 18:10:20.522911] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:04.064 [2024-10-28 18:10:20.522922] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:04.064 [2024-10-28 18:10:20.522938] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:04.064 [2024-10-28 18:10:20.522960] 
ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:04.064 [2024-10-28 18:10:20.522974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.522986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:04.064 [2024-10-28 18:10:20.523000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.331 ms 00:20:04.064 [2024-10-28 18:10:20.523023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.523123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.064 [2024-10-28 18:10:20.523144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:04.064 [2024-10-28 18:10:20.523160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:20:04.064 [2024-10-28 18:10:20.523171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.064 [2024-10-28 18:10:20.523284] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:04.064 [2024-10-28 18:10:20.523300] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:04.064 [2024-10-28 18:10:20.523315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.064 [2024-10-28 18:10:20.523326] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.064 [2024-10-28 18:10:20.523340] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:04.064 [2024-10-28 18:10:20.523351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:04.064 [2024-10-28 18:10:20.523364] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:04.064 [2024-10-28 18:10:20.523375] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:04.064 [2024-10-28 18:10:20.523388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:04.064 [2024-10-28 18:10:20.523399] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.064 [2024-10-28 18:10:20.523412] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:04.064 [2024-10-28 18:10:20.523423] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:04.064 [2024-10-28 18:10:20.523436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:04.064 [2024-10-28 18:10:20.523446] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:04.065 [2024-10-28 18:10:20.523459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:04.065 [2024-10-28 18:10:20.523470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:04.065 [2024-10-28 18:10:20.523496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:04.065 [2024-10-28 18:10:20.523508] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523521] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:04.065 [2024-10-28 18:10:20.523536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.065 [2024-10-28 18:10:20.523560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:04.065 
[2024-10-28 18:10:20.523571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523584] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.065 [2024-10-28 18:10:20.523594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:04.065 [2024-10-28 18:10:20.523607] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.065 [2024-10-28 18:10:20.523630] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:04.065 [2024-10-28 18:10:20.523641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:04.065 [2024-10-28 18:10:20.523664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:04.065 [2024-10-28 18:10:20.523679] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523689] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.065 [2024-10-28 18:10:20.523702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:04.065 [2024-10-28 18:10:20.523713] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:04.065 [2024-10-28 18:10:20.523726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:04.065 [2024-10-28 18:10:20.523737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:04.065 [2024-10-28 18:10:20.523749] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:04.065 [2024-10-28 18:10:20.523759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:04.065 [2024-10-28 18:10:20.523783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:04.065 [2024-10-28 18:10:20.523796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523806] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:04.065 [2024-10-28 18:10:20.523820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:04.065 [2024-10-28 18:10:20.523831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:04.065 [2024-10-28 18:10:20.523862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:04.065 [2024-10-28 18:10:20.523875] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:04.065 [2024-10-28 18:10:20.523892] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:04.065 [2024-10-28 18:10:20.523903] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:04.065 [2024-10-28 18:10:20.523916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:04.065 [2024-10-28 18:10:20.523927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:04.065 [2024-10-28 18:10:20.523941] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:04.065 [2024-10-28 18:10:20.523957] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:04.065 [2024-10-28 
18:10:20.523973] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.065 [2024-10-28 18:10:20.523988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:04.065 [2024-10-28 18:10:20.524003] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:04.065 [2024-10-28 18:10:20.524015] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:04.065 [2024-10-28 18:10:20.524028] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:04.065 [2024-10-28 18:10:20.524040] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:04.065 [2024-10-28 18:10:20.524053] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:04.065 [2024-10-28 18:10:20.524065] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:04.065 [2024-10-28 18:10:20.524079] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:04.065 [2024-10-28 18:10:20.524090] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:04.065 [2024-10-28 18:10:20.524105] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:04.065 [2024-10-28 18:10:20.524117] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:04.065 [2024-10-28 18:10:20.524130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:04.065 [2024-10-28 18:10:20.524141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:04.065 [2024-10-28 18:10:20.524155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:04.065 [2024-10-28 18:10:20.524168] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:04.065 [2024-10-28 18:10:20.524185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:04.065 [2024-10-28 18:10:20.524197] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:04.065 [2024-10-28 18:10:20.524211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:04.065 [2024-10-28 18:10:20.524223] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:04.065 [2024-10-28 18:10:20.524237] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: 
[FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:04.065 [2024-10-28 18:10:20.524249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:04.065 [2024-10-28 18:10:20.524263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:04.065 [2024-10-28 18:10:20.524275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:20:04.065 [2024-10-28 18:10:20.524289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:04.065 [2024-10-28 18:10:20.524341] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:20:04.065 [2024-10-28 18:10:20.524363] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:06.595 [2024-10-28 18:10:22.533215] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.595 [2024-10-28 18:10:22.533298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:06.595 [2024-10-28 18:10:22.533320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2008.886 ms 00:20:06.595 [2024-10-28 18:10:22.533336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.595 [2024-10-28 18:10:22.562315] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.562380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:06.596 [2024-10-28 18:10:22.562402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.716 ms 00:20:06.596 [2024-10-28 18:10:22.562417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.562597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.562631] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:06.596 [2024-10-28 18:10:22.562648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:20:06.596 [2024-10-28 18:10:22.562664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.600455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.600525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:06.596 [2024-10-28 18:10:22.600559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.730 ms 00:20:06.596 [2024-10-28 18:10:22.600572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.600623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.600646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:06.596 [2024-10-28 18:10:22.600658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:06.596 [2024-10-28 18:10:22.600671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.601165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.601215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:06.596 [2024-10-28 18:10:22.601231] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.350 ms 00:20:06.596 [2024-10-28 18:10:22.601261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 
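
Note: startup walks the FTL management pipeline step by step; scrubbing the 5 NV cache chunks dominates (roughly 2 of the 2431 ms total), after which metadata, bands, and the L2P are initialized. The --l2p_dram_limit 10 passed to bdev_ftl_create caps the resident L2P table at 10 MiB of DRAM, which is why the l2p_cache notice below reports a maximum resident size of 9 (of 10) MiB. (The earlier "[: : integer expression expected" complaint looks like restore.sh line 54 comparing an unset flag with -eq; the run proceeds regardless.) Recreating this configuration by hand would look roughly like the following sketch — the UUID is the lvol from this run, and the raised RPC timeout mirrors the harness, since the scrub can take seconds:

    # Sketch: FTL bdev over a thin lvol, NV cache on a split of the PCIe namespace.
    scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d cc37eebb-1434-407f-88b5-cca61caf1b30 \
        -c nvc0n1p0 \
        --l2p_dram_limit 10
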
[2024-10-28 18:10:22.601395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.601429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:06.596 [2024-10-28 18:10:22.601445] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.107 ms 00:20:06.596 [2024-10-28 18:10:22.601462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.618976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.619031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:06.596 [2024-10-28 18:10:22.619097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.488 ms 00:20:06.596 [2024-10-28 18:10:22.619112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.632958] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:06.596 [2024-10-28 18:10:22.635851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.635898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:06.596 [2024-10-28 18:10:22.635919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.619 ms 00:20:06.596 [2024-10-28 18:10:22.635935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.701793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.701883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:06.596 [2024-10-28 18:10:22.701909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 65.810 ms 00:20:06.596 [2024-10-28 18:10:22.701923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.702214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.702247] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:06.596 [2024-10-28 18:10:22.702268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.229 ms 00:20:06.596 [2024-10-28 18:10:22.702281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.731397] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.731442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:06.596 [2024-10-28 18:10:22.731464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.018 ms 00:20:06.596 [2024-10-28 18:10:22.731477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.763897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.763951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:06.596 [2024-10-28 18:10:22.763973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.363 ms 00:20:06.596 [2024-10-28 18:10:22.763986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.764714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.764748] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:06.596 
[2024-10-28 18:10:22.764766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:20:06.596 [2024-10-28 18:10:22.764778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.848452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.848518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:06.596 [2024-10-28 18:10:22.848560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.570 ms 00:20:06.596 [2024-10-28 18:10:22.848573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.880775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.880819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:06.596 [2024-10-28 18:10:22.880863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.086 ms 00:20:06.596 [2024-10-28 18:10:22.880877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.911146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.911185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:06.596 [2024-10-28 18:10:22.911220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.217 ms 00:20:06.596 [2024-10-28 18:10:22.911231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.941288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.941330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:06.596 [2024-10-28 18:10:22.941367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.005 ms 00:20:06.596 [2024-10-28 18:10:22.941395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.941477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.941510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:06.596 [2024-10-28 18:10:22.941544] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:06.596 [2024-10-28 18:10:22.941556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.941677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:06.596 [2024-10-28 18:10:22.941706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:06.596 [2024-10-28 18:10:22.941726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:20:06.596 [2024-10-28 18:10:22.941763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:06.596 [2024-10-28 18:10:22.942875] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2431.549 ms, result 0 00:20:06.596 { 00:20:06.596 "name": "ftl0", 00:20:06.596 "uuid": "e26672bb-562c-40ba-bbba-bd2e0247fc2e" 00:20:06.596 } 00:20:06.596 18:10:22 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:06.596 18:10:22 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:06.854 18:10:23 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:06.854 18:10:23 ftl.ftl_restore -- 
ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:07.112 [2024-10-28 18:10:23.566526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.112 [2024-10-28 18:10:23.566590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:07.112 [2024-10-28 18:10:23.566626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:20:07.112 [2024-10-28 18:10:23.566651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.112 [2024-10-28 18:10:23.566687] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:07.112 [2024-10-28 18:10:23.569880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.112 [2024-10-28 18:10:23.569917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:07.112 [2024-10-28 18:10:23.569935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.165 ms 00:20:07.112 [2024-10-28 18:10:23.569947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.112 [2024-10-28 18:10:23.570261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.112 [2024-10-28 18:10:23.570287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:07.112 [2024-10-28 18:10:23.570307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.275 ms 00:20:07.112 [2024-10-28 18:10:23.570320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.112 [2024-10-28 18:10:23.573441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.112 [2024-10-28 18:10:23.573485] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:07.112 [2024-10-28 18:10:23.573517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.076 ms 00:20:07.112 [2024-10-28 18:10:23.573528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.112 [2024-10-28 18:10:23.579911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.112 [2024-10-28 18:10:23.579948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:07.112 [2024-10-28 18:10:23.579999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.356 ms 00:20:07.112 [2024-10-28 18:10:23.580010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.371 [2024-10-28 18:10:23.611190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.371 [2024-10-28 18:10:23.611237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:07.371 [2024-10-28 18:10:23.611258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.097 ms 00:20:07.371 [2024-10-28 18:10:23.611271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.371 [2024-10-28 18:10:23.628862] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.371 [2024-10-28 18:10:23.628907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:07.371 [2024-10-28 18:10:23.628927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.518 ms 00:20:07.371 [2024-10-28 18:10:23.628940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.371 [2024-10-28 18:10:23.629147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.371 [2024-10-28 18:10:23.629166] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:07.371 [2024-10-28 18:10:23.629182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.154 ms 00:20:07.371 [2024-10-28 18:10:23.629193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.371 [2024-10-28 18:10:23.661037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.371 [2024-10-28 18:10:23.661090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:07.371 [2024-10-28 18:10:23.661112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.813 ms 00:20:07.371 [2024-10-28 18:10:23.661125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.371 [2024-10-28 18:10:23.692968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.371 [2024-10-28 18:10:23.693013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:07.371 [2024-10-28 18:10:23.693050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.773 ms 00:20:07.371 [2024-10-28 18:10:23.693062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.371 [2024-10-28 18:10:23.722739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.371 [2024-10-28 18:10:23.722779] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:07.371 [2024-10-28 18:10:23.722814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.604 ms 00:20:07.371 [2024-10-28 18:10:23.722825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.371 [2024-10-28 18:10:23.751330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.371 [2024-10-28 18:10:23.751371] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:07.371 [2024-10-28 18:10:23.751407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.378 ms 00:20:07.371 [2024-10-28 18:10:23.751419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.371 [2024-10-28 18:10:23.751468] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:07.371 [2024-10-28 18:10:23.751506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751681] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751722] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:07.371 [2024-10-28 18:10:23.751822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.751993] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 
[2024-10-28 18:10:23.752049] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752077] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:20:07.372 [2024-10-28 18:10:23.752413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752927] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752955] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.752984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:07.372 [2024-10-28 18:10:23.753007] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:07.372 [2024-10-28 18:10:23.753026] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e26672bb-562c-40ba-bbba-bd2e0247fc2e 00:20:07.372 [2024-10-28 18:10:23.753038] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:07.372 [2024-10-28 18:10:23.753054] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:07.372 [2024-10-28 18:10:23.753065] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:07.372 [2024-10-28 18:10:23.753083] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:07.372 [2024-10-28 18:10:23.753095] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:07.372 [2024-10-28 18:10:23.753109] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:07.372 [2024-10-28 18:10:23.753121] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:07.372 [2024-10-28 18:10:23.753134] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:07.372 [2024-10-28 18:10:23.753144] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:20:07.372 [2024-10-28 18:10:23.753159] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.372 [2024-10-28 18:10:23.753171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:07.372 [2024-10-28 18:10:23.753187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.694 ms 00:20:07.372 [2024-10-28 18:10:23.753199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.373 [2024-10-28 18:10:23.769209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.373 [2024-10-28 18:10:23.769249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:07.373 [2024-10-28 18:10:23.769284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.937 ms 00:20:07.373 [2024-10-28 18:10:23.769296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.373 [2024-10-28 18:10:23.769770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:07.373 [2024-10-28 18:10:23.769800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:07.373 [2024-10-28 18:10:23.769818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.438 ms 00:20:07.373 [2024-10-28 18:10:23.769860] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.373 [2024-10-28 18:10:23.825472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.373 [2024-10-28 18:10:23.825528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:07.373 [2024-10-28 18:10:23.825565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.373 [2024-10-28 18:10:23.825577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.373 [2024-10-28 18:10:23.825658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.373 [2024-10-28 18:10:23.825704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:07.373 [2024-10-28 18:10:23.825735] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.373 [2024-10-28 18:10:23.825761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.373 [2024-10-28 18:10:23.825933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.373 [2024-10-28 18:10:23.825964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:07.373 [2024-10-28 18:10:23.825982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.373 [2024-10-28 18:10:23.825995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.373 [2024-10-28 18:10:23.826029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.373 [2024-10-28 18:10:23.826044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:07.373 [2024-10-28 18:10:23.826058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.373 [2024-10-28 18:10:23.826069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.631 [2024-10-28 18:10:23.928640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.631 [2024-10-28 18:10:23.928695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:07.631 [2024-10-28 18:10:23.928748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
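
Note: bdev_ftl_unload drives the mirror image of startup — the L2P, NV cache metadata, valid map, P2L, band and trim metadata, and finally the superblock are persisted, the device is marked clean, band validity is dumped (all 100 bands free, 0 valid LBAs, no user writes yet), and the remaining init resources are rolled back. The clean superblock is what lets a later load skip full recovery; the RPC returns true below on success. The unload itself is a single call:

    # Persist FTL state and tear the bdev down cleanly (returns 'true' on success).
    scripts/rpc.py bdev_ftl_unload -b ftl0
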
00:20:07.631 [2024-10-28 18:10:23.928761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.631 [2024-10-28 18:10:24.004728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.631 [2024-10-28 18:10:24.004792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:07.631 [2024-10-28 18:10:24.004828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.631 [2024-10-28 18:10:24.004842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.631 [2024-10-28 18:10:24.005009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.631 [2024-10-28 18:10:24.005028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:07.631 [2024-10-28 18:10:24.005057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.631 [2024-10-28 18:10:24.005085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.631 [2024-10-28 18:10:24.005177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.631 [2024-10-28 18:10:24.005195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:07.631 [2024-10-28 18:10:24.005210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.631 [2024-10-28 18:10:24.005222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.631 [2024-10-28 18:10:24.005368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.631 [2024-10-28 18:10:24.005397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:07.631 [2024-10-28 18:10:24.005415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.631 [2024-10-28 18:10:24.005427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.631 [2024-10-28 18:10:24.005489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.631 [2024-10-28 18:10:24.005508] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:07.631 [2024-10-28 18:10:24.005523] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.631 [2024-10-28 18:10:24.005534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.631 [2024-10-28 18:10:24.005584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.631 [2024-10-28 18:10:24.005602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:07.632 [2024-10-28 18:10:24.005616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.632 [2024-10-28 18:10:24.005628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.632 [2024-10-28 18:10:24.005689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:07.632 [2024-10-28 18:10:24.005715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:07.632 [2024-10-28 18:10:24.005731] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:07.632 [2024-10-28 18:10:24.005752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:07.632 [2024-10-28 18:10:24.005952] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 439.402 ms, result 0 00:20:07.632 true 00:20:07.632 18:10:24 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 76139 
00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76139 ']' 00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76139 00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@957 -- # uname 00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 76139 00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:07.632 killing process with pid 76139 00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@970 -- # echo 'killing process with pid 76139' 00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@971 -- # kill 76139 00:20:07.632 18:10:24 ftl.ftl_restore -- common/autotest_common.sh@976 -- # wait 76139 00:20:09.554 18:10:25 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:14.817 262144+0 records in 00:20:14.817 262144+0 records out 00:20:14.817 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.75451 s, 226 MB/s 00:20:14.817 18:10:30 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:16.714 18:10:32 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:16.714 [2024-10-28 18:10:33.049441] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:20:16.714 [2024-10-28 18:10:33.049653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76371 ] 00:20:16.971 [2024-10-28 18:10:33.252198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.971 [2024-10-28 18:10:33.377435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:17.229 [2024-10-28 18:10:33.699735] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.229 [2024-10-28 18:10:33.699859] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.488 [2024-10-28 18:10:33.867131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.488 [2024-10-28 18:10:33.867228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.488 [2024-10-28 18:10:33.867273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:17.488 [2024-10-28 18:10:33.867285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.488 [2024-10-28 18:10:33.867351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.488 [2024-10-28 18:10:33.867369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.488 [2024-10-28 18:10:33.867390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:20:17.488 [2024-10-28 18:10:33.867400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.488 [2024-10-28 18:10:33.867429] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] 
Using nvc0n1p0 as write buffer cache 00:20:17.488 [2024-10-28 18:10:33.868435] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.488 [2024-10-28 18:10:33.868488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.488 [2024-10-28 18:10:33.868502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.488 [2024-10-28 18:10:33.868514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.066 ms 00:20:17.488 [2024-10-28 18:10:33.868526] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.488 [2024-10-28 18:10:33.869748] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.488 [2024-10-28 18:10:33.887107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.488 [2024-10-28 18:10:33.887196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.488 [2024-10-28 18:10:33.887212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.361 ms 00:20:17.488 [2024-10-28 18:10:33.887224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.488 [2024-10-28 18:10:33.887293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.488 [2024-10-28 18:10:33.887311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.488 [2024-10-28 18:10:33.887322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:20:17.488 [2024-10-28 18:10:33.887333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.488 [2024-10-28 18:10:33.891864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.488 [2024-10-28 18:10:33.891928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.488 [2024-10-28 18:10:33.891944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.446 ms 00:20:17.488 [2024-10-28 18:10:33.891956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.488 [2024-10-28 18:10:33.892071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.489 [2024-10-28 18:10:33.892091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.489 [2024-10-28 18:10:33.892103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.083 ms 00:20:17.489 [2024-10-28 18:10:33.892114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.489 [2024-10-28 18:10:33.892231] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.489 [2024-10-28 18:10:33.892249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.489 [2024-10-28 18:10:33.892261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:17.489 [2024-10-28 18:10:33.892272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.489 [2024-10-28 18:10:33.892305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.489 [2024-10-28 18:10:33.896535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.489 [2024-10-28 18:10:33.896584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.489 [2024-10-28 18:10:33.896613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.239 ms 00:20:17.489 [2024-10-28 18:10:33.896633] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.489 [2024-10-28 18:10:33.896668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.489 [2024-10-28 18:10:33.896681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.489 [2024-10-28 18:10:33.896692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:17.489 [2024-10-28 18:10:33.896702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.489 [2024-10-28 18:10:33.896743] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.489 [2024-10-28 18:10:33.896809] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.489 [2024-10-28 18:10:33.896866] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.489 [2024-10-28 18:10:33.896898] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.489 [2024-10-28 18:10:33.897012] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.489 [2024-10-28 18:10:33.897027] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.489 [2024-10-28 18:10:33.897041] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.489 [2024-10-28 18:10:33.897056] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.489 [2024-10-28 18:10:33.897087] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.489 [2024-10-28 18:10:33.897100] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:17.489 [2024-10-28 18:10:33.897111] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.489 [2024-10-28 18:10:33.897122] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.489 [2024-10-28 18:10:33.897132] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.489 [2024-10-28 18:10:33.897154] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.489 [2024-10-28 18:10:33.897165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.489 [2024-10-28 18:10:33.897177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.414 ms 00:20:17.489 [2024-10-28 18:10:33.897187] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.489 [2024-10-28 18:10:33.897281] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.489 [2024-10-28 18:10:33.897295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.489 [2024-10-28 18:10:33.897306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:17.489 [2024-10-28 18:10:33.897317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.489 [2024-10-28 18:10:33.897482] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.489 [2024-10-28 18:10:33.897513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.489 [2024-10-28 18:10:33.897526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:20:17.489 [2024-10-28 18:10:33.897536] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897547] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.489 [2024-10-28 18:10:33.897557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897567] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:17.489 [2024-10-28 18:10:33.897577] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.489 [2024-10-28 18:10:33.897588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.489 [2024-10-28 18:10:33.897607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.489 [2024-10-28 18:10:33.897617] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:17.489 [2024-10-28 18:10:33.897627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.489 [2024-10-28 18:10:33.897637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.489 [2024-10-28 18:10:33.897647] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:17.489 [2024-10-28 18:10:33.897675] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.489 [2024-10-28 18:10:33.897697] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:17.489 [2024-10-28 18:10:33.897707] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.489 [2024-10-28 18:10:33.897727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.489 [2024-10-28 18:10:33.897746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.489 [2024-10-28 18:10:33.897756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.489 [2024-10-28 18:10:33.897803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.489 [2024-10-28 18:10:33.897814] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897824] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.489 [2024-10-28 18:10:33.897859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.489 [2024-10-28 18:10:33.897873] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.489 [2024-10-28 18:10:33.897893] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.489 [2024-10-28 18:10:33.897903] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897913] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.489 [2024-10-28 18:10:33.897924] ftl_layout.c: 130:dump_region: *NOTICE*: 
[FTL][ftl0] Region trim_md_mirror 00:20:17.489 [2024-10-28 18:10:33.897934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:17.489 [2024-10-28 18:10:33.897944] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.489 [2024-10-28 18:10:33.897954] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.489 [2024-10-28 18:10:33.897965] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:17.489 [2024-10-28 18:10:33.897974] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.489 [2024-10-28 18:10:33.897985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.489 [2024-10-28 18:10:33.897995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:17.489 [2024-10-28 18:10:33.898004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.489 [2024-10-28 18:10:33.898014] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.489 [2024-10-28 18:10:33.898026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.489 [2024-10-28 18:10:33.898037] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.489 [2024-10-28 18:10:33.898048] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.489 [2024-10-28 18:10:33.898059] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:17.489 [2024-10-28 18:10:33.898071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.489 [2024-10-28 18:10:33.898081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.489 [2024-10-28 18:10:33.898092] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.489 [2024-10-28 18:10:33.898102] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.489 [2024-10-28 18:10:33.898112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.489 [2024-10-28 18:10:33.898124] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.489 [2024-10-28 18:10:33.898138] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.489 [2024-10-28 18:10:33.898150] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:17.489 [2024-10-28 18:10:33.898161] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:17.489 [2024-10-28 18:10:33.898172] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:17.489 [2024-10-28 18:10:33.898183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:17.489 [2024-10-28 18:10:33.898193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:17.489 [2024-10-28 18:10:33.898219] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:17.489 [2024-10-28 18:10:33.898230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:17.489 [2024-10-28 18:10:33.898240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:17.489 [2024-10-28 18:10:33.898252] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:17.489 [2024-10-28 18:10:33.898262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:17.489 [2024-10-28 18:10:33.898274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:17.489 [2024-10-28 18:10:33.898284] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:17.489 [2024-10-28 18:10:33.898294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:17.490 [2024-10-28 18:10:33.898305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:17.490 [2024-10-28 18:10:33.898316] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.490 [2024-10-28 18:10:33.898340] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.490 [2024-10-28 18:10:33.898351] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.490 [2024-10-28 18:10:33.898363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.490 [2024-10-28 18:10:33.898373] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.490 [2024-10-28 18:10:33.898384] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.490 [2024-10-28 18:10:33.898396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.490 [2024-10-28 18:10:33.898407] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.490 [2024-10-28 18:10:33.898418] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.999 ms 00:20:17.490 [2024-10-28 18:10:33.898429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.490 [2024-10-28 18:10:33.930612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.490 [2024-10-28 18:10:33.930682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.490 [2024-10-28 18:10:33.930715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.119 ms 00:20:17.490 [2024-10-28 18:10:33.930726] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.490 [2024-10-28 18:10:33.930833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.490 [2024-10-28 18:10:33.930863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.490 [2024-10-28 18:10:33.930875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.057 ms 00:20:17.490 [2024-10-28 18:10:33.930885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.748 [2024-10-28 18:10:33.976990] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.748 [2024-10-28 18:10:33.977057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.748 [2024-10-28 18:10:33.977105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.996 ms 00:20:17.748 [2024-10-28 18:10:33.977116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.748 [2024-10-28 18:10:33.977176] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.748 [2024-10-28 18:10:33.977192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.748 [2024-10-28 18:10:33.977204] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.748 [2024-10-28 18:10:33.977227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.748 [2024-10-28 18:10:33.977672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.748 [2024-10-28 18:10:33.977702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.748 [2024-10-28 18:10:33.977716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.319 ms 00:20:17.748 [2024-10-28 18:10:33.977727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.748 [2024-10-28 18:10:33.977929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.748 [2024-10-28 18:10:33.977952] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.748 [2024-10-28 18:10:33.977965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.172 ms 00:20:17.748 [2024-10-28 18:10:33.977989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.748 [2024-10-28 18:10:33.994592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.748 [2024-10-28 18:10:33.994645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.748 [2024-10-28 18:10:33.994685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.573 ms 00:20:17.748 [2024-10-28 18:10:33.994696] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.748 [2024-10-28 18:10:34.009424] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:17.748 [2024-10-28 18:10:34.009466] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.748 [2024-10-28 18:10:34.009498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.748 [2024-10-28 18:10:34.009509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.749 [2024-10-28 18:10:34.009521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.670 ms 00:20:17.749 [2024-10-28 18:10:34.009530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.040538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.040610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.749 [2024-10-28 18:10:34.040657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.962 ms 00:20:17.749 [2024-10-28 18:10:34.040668] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.055819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.055895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:17.749 [2024-10-28 18:10:34.055926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.102 ms 00:20:17.749 [2024-10-28 18:10:34.055936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.070088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.070172] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:17.749 [2024-10-28 18:10:34.070202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.110 ms 00:20:17.749 [2024-10-28 18:10:34.070212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.071082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.071129] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.749 [2024-10-28 18:10:34.071143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.755 ms 00:20:17.749 [2024-10-28 18:10:34.071154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.147000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.147096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:17.749 [2024-10-28 18:10:34.147132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 75.813 ms 00:20:17.749 [2024-10-28 18:10:34.147173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.158303] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:17.749 [2024-10-28 18:10:34.160492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.160538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.749 [2024-10-28 18:10:34.160568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.243 ms 00:20:17.749 [2024-10-28 18:10:34.160579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.160682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.160701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:17.749 [2024-10-28 18:10:34.160713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:20:17.749 [2024-10-28 18:10:34.160723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.160867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.160901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.749 [2024-10-28 18:10:34.160915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:20:17.749 [2024-10-28 18:10:34.160926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.160958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.160972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Start core poller 00:20:17.749 [2024-10-28 18:10:34.160984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:17.749 [2024-10-28 18:10:34.160994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.161042] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:17.749 [2024-10-28 18:10:34.161060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.161081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:17.749 [2024-10-28 18:10:34.161095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:20:17.749 [2024-10-28 18:10:34.161105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.188964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.189021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.749 [2024-10-28 18:10:34.189053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.835 ms 00:20:17.749 [2024-10-28 18:10:34.189064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.189161] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.749 [2024-10-28 18:10:34.189178] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:17.749 [2024-10-28 18:10:34.189190] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:17.749 [2024-10-28 18:10:34.189200] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.749 [2024-10-28 18:10:34.190496] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 322.756 ms, result 0 00:20:19.121  [2024-10-28T18:10:36.535Z] Copying: 24/1024 [MB] (24 MBps) [2024-10-28T18:10:37.469Z] Copying: 48/1024 [MB] (24 MBps) [2024-10-28T18:10:38.404Z] Copying: 73/1024 [MB] (24 MBps) [2024-10-28T18:10:39.362Z] Copying: 97/1024 [MB] (24 MBps) [2024-10-28T18:10:40.295Z] Copying: 122/1024 [MB] (24 MBps) [2024-10-28T18:10:41.231Z] Copying: 147/1024 [MB] (25 MBps) [2024-10-28T18:10:42.605Z] Copying: 173/1024 [MB] (25 MBps) [2024-10-28T18:10:43.540Z] Copying: 198/1024 [MB] (25 MBps) [2024-10-28T18:10:44.472Z] Copying: 224/1024 [MB] (25 MBps) [2024-10-28T18:10:45.405Z] Copying: 249/1024 [MB] (25 MBps) [2024-10-28T18:10:46.339Z] Copying: 273/1024 [MB] (24 MBps) [2024-10-28T18:10:47.275Z] Copying: 299/1024 [MB] (25 MBps) [2024-10-28T18:10:48.211Z] Copying: 323/1024 [MB] (24 MBps) [2024-10-28T18:10:49.585Z] Copying: 347/1024 [MB] (24 MBps) [2024-10-28T18:10:50.523Z] Copying: 372/1024 [MB] (24 MBps) [2024-10-28T18:10:51.455Z] Copying: 398/1024 [MB] (26 MBps) [2024-10-28T18:10:52.394Z] Copying: 424/1024 [MB] (25 MBps) [2024-10-28T18:10:53.371Z] Copying: 449/1024 [MB] (25 MBps) [2024-10-28T18:10:54.303Z] Copying: 475/1024 [MB] (25 MBps) [2024-10-28T18:10:55.236Z] Copying: 500/1024 [MB] (25 MBps) [2024-10-28T18:10:56.609Z] Copying: 525/1024 [MB] (24 MBps) [2024-10-28T18:10:57.543Z] Copying: 549/1024 [MB] (24 MBps) [2024-10-28T18:10:58.478Z] Copying: 574/1024 [MB] (25 MBps) [2024-10-28T18:10:59.412Z] Copying: 598/1024 [MB] (24 MBps) [2024-10-28T18:11:00.345Z] Copying: 624/1024 [MB] (25 MBps) [2024-10-28T18:11:01.280Z] Copying: 650/1024 [MB] (25 MBps) [2024-10-28T18:11:02.213Z] Copying: 675/1024 [MB] (25 
MBps) [2024-10-28T18:11:03.604Z] Copying: 700/1024 [MB] (25 MBps) [2024-10-28T18:11:04.537Z] Copying: 725/1024 [MB] (24 MBps) [2024-10-28T18:11:05.473Z] Copying: 750/1024 [MB] (24 MBps) [2024-10-28T18:11:06.404Z] Copying: 774/1024 [MB] (24 MBps) [2024-10-28T18:11:07.338Z] Copying: 800/1024 [MB] (26 MBps) [2024-10-28T18:11:08.272Z] Copying: 826/1024 [MB] (26 MBps) [2024-10-28T18:11:09.206Z] Copying: 852/1024 [MB] (26 MBps) [2024-10-28T18:11:10.578Z] Copying: 879/1024 [MB] (26 MBps) [2024-10-28T18:11:11.511Z] Copying: 904/1024 [MB] (25 MBps) [2024-10-28T18:11:12.445Z] Copying: 929/1024 [MB] (24 MBps) [2024-10-28T18:11:13.377Z] Copying: 954/1024 [MB] (25 MBps) [2024-10-28T18:11:14.312Z] Copying: 979/1024 [MB] (24 MBps) [2024-10-28T18:11:15.247Z] Copying: 1005/1024 [MB] (26 MBps) [2024-10-28T18:11:15.247Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-10-28 18:11:14.925116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:14.925325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:58.769 [2024-10-28 18:11:14.925470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:58.769 [2024-10-28 18:11:14.925528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:14.925657] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:58.769 [2024-10-28 18:11:14.929290] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:14.929477] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:58.769 [2024-10-28 18:11:14.929593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.479 ms 00:20:58.769 [2024-10-28 18:11:14.929711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:14.931269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:14.931450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:58.769 [2024-10-28 18:11:14.931572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.479 ms 00:20:58.769 [2024-10-28 18:11:14.931623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:14.947588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:14.947782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:58.769 [2024-10-28 18:11:14.947928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.838 ms 00:20:58.769 [2024-10-28 18:11:14.947981] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:14.954953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:14.955119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:58.769 [2024-10-28 18:11:14.955251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.832 ms 00:20:58.769 [2024-10-28 18:11:14.955274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:14.987220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:14.987297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:58.769 [2024-10-28 18:11:14.987314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 31.873 ms 00:20:58.769 [2024-10-28 18:11:14.987326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:15.005090] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:15.005150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:58.769 [2024-10-28 18:11:15.005168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.719 ms 00:20:58.769 [2024-10-28 18:11:15.005180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:15.005402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:15.005422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:58.769 [2024-10-28 18:11:15.005444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.176 ms 00:20:58.769 [2024-10-28 18:11:15.005455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:15.035289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:15.035342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:58.769 [2024-10-28 18:11:15.035372] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.813 ms 00:20:58.769 [2024-10-28 18:11:15.035383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:15.063888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:15.063939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:58.769 [2024-10-28 18:11:15.063983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.466 ms 00:20:58.769 [2024-10-28 18:11:15.063993] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:15.092051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:15.092103] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:58.769 [2024-10-28 18:11:15.092133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.018 ms 00:20:58.769 [2024-10-28 18:11:15.092143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:15.122443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.769 [2024-10-28 18:11:15.122500] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:58.769 [2024-10-28 18:11:15.122531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.220 ms 00:20:58.769 [2024-10-28 18:11:15.122542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.769 [2024-10-28 18:11:15.122583] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:58.769 [2024-10-28 18:11:15.122605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 
00:20:58.769 [2024-10-28 18:11:15.122652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:58.769 [2024-10-28 18:11:15.122771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122951] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 
wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.122998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123076] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123111] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123122] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123168] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123214] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123226] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123468] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123560] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:58.770 [2024-10-28 18:11:15.123827] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:58.770 [2024-10-28 18:11:15.123859] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e26672bb-562c-40ba-bbba-bd2e0247fc2e 00:20:58.770 [2024-10-28 18:11:15.123871] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:58.770 [2024-10-28 18:11:15.123886] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total 
writes: 960 00:20:58.770 [2024-10-28 18:11:15.123896] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:58.770 [2024-10-28 18:11:15.123907] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:58.770 [2024-10-28 18:11:15.123917] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:58.770 [2024-10-28 18:11:15.123928] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:58.770 [2024-10-28 18:11:15.123939] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:58.770 [2024-10-28 18:11:15.123961] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:58.771 [2024-10-28 18:11:15.123971] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:58.771 [2024-10-28 18:11:15.123982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.771 [2024-10-28 18:11:15.123993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:58.771 [2024-10-28 18:11:15.124005] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.400 ms 00:20:58.771 [2024-10-28 18:11:15.124016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.771 [2024-10-28 18:11:15.140897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.771 [2024-10-28 18:11:15.140978] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:58.771 [2024-10-28 18:11:15.140995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.838 ms 00:20:58.771 [2024-10-28 18:11:15.141007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.771 [2024-10-28 18:11:15.141462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.771 [2024-10-28 18:11:15.141497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:58.771 [2024-10-28 18:11:15.141511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.427 ms 00:20:58.771 [2024-10-28 18:11:15.141522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.771 [2024-10-28 18:11:15.183194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.771 [2024-10-28 18:11:15.183259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.771 [2024-10-28 18:11:15.183290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.771 [2024-10-28 18:11:15.183301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.771 [2024-10-28 18:11:15.183364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.771 [2024-10-28 18:11:15.183379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.771 [2024-10-28 18:11:15.183390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.771 [2024-10-28 18:11:15.183401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.771 [2024-10-28 18:11:15.183506] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.771 [2024-10-28 18:11:15.183541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.771 [2024-10-28 18:11:15.183553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.771 [2024-10-28 18:11:15.183564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.771 [2024-10-28 18:11:15.183587] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:58.771 [2024-10-28 18:11:15.183600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.771 [2024-10-28 18:11:15.183611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:58.771 [2024-10-28 18:11:15.183622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.029 [2024-10-28 18:11:15.280095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.029 [2024-10-28 18:11:15.280175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:59.029 [2024-10-28 18:11:15.280207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.029 [2024-10-28 18:11:15.280218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.029 [2024-10-28 18:11:15.360631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.029 [2024-10-28 18:11:15.360719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:59.029 [2024-10-28 18:11:15.360753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.029 [2024-10-28 18:11:15.360764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.029 [2024-10-28 18:11:15.360875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.029 [2024-10-28 18:11:15.360909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:59.029 [2024-10-28 18:11:15.360921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.029 [2024-10-28 18:11:15.360933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.029 [2024-10-28 18:11:15.360982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.029 [2024-10-28 18:11:15.360997] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:59.029 [2024-10-28 18:11:15.361009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.029 [2024-10-28 18:11:15.361019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.029 [2024-10-28 18:11:15.361142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.029 [2024-10-28 18:11:15.361175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:59.029 [2024-10-28 18:11:15.361188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.029 [2024-10-28 18:11:15.361199] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.029 [2024-10-28 18:11:15.361256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.029 [2024-10-28 18:11:15.361275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:59.029 [2024-10-28 18:11:15.361287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.029 [2024-10-28 18:11:15.361298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.029 [2024-10-28 18:11:15.361341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.029 [2024-10-28 18:11:15.361354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:59.029 [2024-10-28 18:11:15.361381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.029 [2024-10-28 18:11:15.361392] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:20:59.029 [2024-10-28 18:11:15.361442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:59.029 [2024-10-28 18:11:15.361458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:59.029 [2024-10-28 18:11:15.361469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:59.029 [2024-10-28 18:11:15.361480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:59.029 [2024-10-28 18:11:15.361637] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 436.489 ms, result 0 00:20:59.963 00:20:59.963 00:20:59.963 18:11:16 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:00.221 [2024-10-28 18:11:16.504453] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:21:00.222 [2024-10-28 18:11:16.504644] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76807 ] 00:21:00.222 [2024-10-28 18:11:16.684552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:00.480 [2024-10-28 18:11:16.783795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:00.738 [2024-10-28 18:11:17.089253] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.738 [2024-10-28 18:11:17.089354] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:00.998 [2024-10-28 18:11:17.248658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.248750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:00.998 [2024-10-28 18:11:17.248807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:00.998 [2024-10-28 18:11:17.248819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.248908] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.248928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:00.998 [2024-10-28 18:11:17.248944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:21:00.998 [2024-10-28 18:11:17.248955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.248987] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:00.998 [2024-10-28 18:11:17.249937] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:00.998 [2024-10-28 18:11:17.250000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.250014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:00.998 [2024-10-28 18:11:17.250027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.020 ms 00:21:00.998 [2024-10-28 18:11:17.250037] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.251170] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:00.998 [2024-10-28 18:11:17.267714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.267773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:00.998 [2024-10-28 18:11:17.267806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.561 ms 00:21:00.998 [2024-10-28 18:11:17.267818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.267912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.267932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:00.998 [2024-10-28 18:11:17.267944] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:21:00.998 [2024-10-28 18:11:17.267955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.272377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.272441] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:00.998 [2024-10-28 18:11:17.272457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.327 ms 00:21:00.998 [2024-10-28 18:11:17.272468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.272590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.272610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:00.998 [2024-10-28 18:11:17.272622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.081 ms 00:21:00.998 [2024-10-28 18:11:17.272632] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.272710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.272728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:00.998 [2024-10-28 18:11:17.272741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:21:00.998 [2024-10-28 18:11:17.272752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.272786] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:00.998 [2024-10-28 18:11:17.277133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.277185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:00.998 [2024-10-28 18:11:17.277199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.356 ms 00:21:00.998 [2024-10-28 18:11:17.277215] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.277267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.277280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:00.998 [2024-10-28 18:11:17.277292] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:00.998 [2024-10-28 18:11:17.277302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.277346] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:00.998 [2024-10-28 18:11:17.277390] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:00.998 [2024-10-28 18:11:17.277448] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:00.998 [2024-10-28 18:11:17.277471] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:00.998 [2024-10-28 18:11:17.277586] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:00.998 [2024-10-28 18:11:17.277601] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:00.998 [2024-10-28 18:11:17.277616] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:00.998 [2024-10-28 18:11:17.277630] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:00.998 [2024-10-28 18:11:17.277643] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:00.998 [2024-10-28 18:11:17.277655] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:00.998 [2024-10-28 18:11:17.277665] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:00.998 [2024-10-28 18:11:17.277676] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:00.998 [2024-10-28 18:11:17.277686] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:00.998 [2024-10-28 18:11:17.277702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.277713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:00.998 [2024-10-28 18:11:17.277724] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.359 ms 00:21:00.998 [2024-10-28 18:11:17.277734] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.277827] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.998 [2024-10-28 18:11:17.277841] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:00.998 [2024-10-28 18:11:17.277852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:21:00.998 [2024-10-28 18:11:17.277863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.998 [2024-10-28 18:11:17.278050] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:00.998 [2024-10-28 18:11:17.278085] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:00.998 [2024-10-28 18:11:17.278099] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.998 [2024-10-28 18:11:17.278111] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.998 [2024-10-28 18:11:17.278122] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:00.998 [2024-10-28 18:11:17.278132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:00.998 [2024-10-28 18:11:17.278143] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:00.998 [2024-10-28 18:11:17.278153] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:00.998 [2024-10-28 18:11:17.278163] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:00.998 [2024-10-28 
18:11:17.278172] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.998 [2024-10-28 18:11:17.278183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:00.998 [2024-10-28 18:11:17.278192] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:00.998 [2024-10-28 18:11:17.278202] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:00.998 [2024-10-28 18:11:17.278212] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:00.998 [2024-10-28 18:11:17.278222] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:00.998 [2024-10-28 18:11:17.278244] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.998 [2024-10-28 18:11:17.278256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:00.998 [2024-10-28 18:11:17.278267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:00.998 [2024-10-28 18:11:17.278277] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.998 [2024-10-28 18:11:17.278288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:00.998 [2024-10-28 18:11:17.278297] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:00.998 [2024-10-28 18:11:17.278307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.998 [2024-10-28 18:11:17.278317] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:00.998 [2024-10-28 18:11:17.278327] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:00.998 [2024-10-28 18:11:17.278336] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.998 [2024-10-28 18:11:17.278346] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:00.999 [2024-10-28 18:11:17.278356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:00.999 [2024-10-28 18:11:17.278366] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.999 [2024-10-28 18:11:17.278376] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:00.999 [2024-10-28 18:11:17.278386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:00.999 [2024-10-28 18:11:17.278395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:00.999 [2024-10-28 18:11:17.278405] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:00.999 [2024-10-28 18:11:17.278415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:00.999 [2024-10-28 18:11:17.278424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.999 [2024-10-28 18:11:17.278434] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:00.999 [2024-10-28 18:11:17.278444] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:00.999 [2024-10-28 18:11:17.278454] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:00.999 [2024-10-28 18:11:17.278464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:00.999 [2024-10-28 18:11:17.278474] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:00.999 [2024-10-28 18:11:17.278483] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.999 [2024-10-28 18:11:17.278493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:21:00.999 [2024-10-28 18:11:17.278503] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:00.999 [2024-10-28 18:11:17.278512] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.999 [2024-10-28 18:11:17.278522] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:00.999 [2024-10-28 18:11:17.278533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:00.999 [2024-10-28 18:11:17.278543] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:00.999 [2024-10-28 18:11:17.278554] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:00.999 [2024-10-28 18:11:17.278565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:00.999 [2024-10-28 18:11:17.278577] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:00.999 [2024-10-28 18:11:17.278587] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:00.999 [2024-10-28 18:11:17.278597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:00.999 [2024-10-28 18:11:17.278606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:00.999 [2024-10-28 18:11:17.278617] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:00.999 [2024-10-28 18:11:17.278628] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:00.999 [2024-10-28 18:11:17.278642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.999 [2024-10-28 18:11:17.278654] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:00.999 [2024-10-28 18:11:17.278665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:00.999 [2024-10-28 18:11:17.278676] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:00.999 [2024-10-28 18:11:17.278687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:00.999 [2024-10-28 18:11:17.278698] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:00.999 [2024-10-28 18:11:17.278709] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:00.999 [2024-10-28 18:11:17.278719] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:00.999 [2024-10-28 18:11:17.278730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:00.999 [2024-10-28 18:11:17.278740] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:00.999 [2024-10-28 18:11:17.278751] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:00.999 [2024-10-28 18:11:17.278762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:00.999 [2024-10-28 18:11:17.278773] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:00.999 [2024-10-28 18:11:17.278783] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:00.999 [2024-10-28 18:11:17.278794] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:00.999 [2024-10-28 18:11:17.278805] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:00.999 [2024-10-28 18:11:17.278822] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:00.999 [2024-10-28 18:11:17.278849] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:00.999 [2024-10-28 18:11:17.278863] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:00.999 [2024-10-28 18:11:17.278874] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:00.999 [2024-10-28 18:11:17.278885] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:00.999 [2024-10-28 18:11:17.278897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.278908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:00.999 [2024-10-28 18:11:17.278920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.918 ms 00:21:00.999 [2024-10-28 18:11:17.278931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.312574] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.312650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:00.999 [2024-10-28 18:11:17.312684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.578 ms 00:21:00.999 [2024-10-28 18:11:17.312695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.312826] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.312842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:00.999 [2024-10-28 18:11:17.312854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.078 ms 00:21:00.999 [2024-10-28 18:11:17.312878] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.360299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.360375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:00.999 [2024-10-28 18:11:17.360410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.325 ms 00:21:00.999 [2024-10-28 18:11:17.360421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.360495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 
18:11:17.360513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:00.999 [2024-10-28 18:11:17.360526] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:00.999 [2024-10-28 18:11:17.360557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.361006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.361041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:00.999 [2024-10-28 18:11:17.361056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:21:00.999 [2024-10-28 18:11:17.361068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.361227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.361265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:00.999 [2024-10-28 18:11:17.361279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:21:00.999 [2024-10-28 18:11:17.361298] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.378494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.378557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:00.999 [2024-10-28 18:11:17.378599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.167 ms 00:21:00.999 [2024-10-28 18:11:17.378610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.394513] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:00.999 [2024-10-28 18:11:17.394568] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:00.999 [2024-10-28 18:11:17.394601] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.394612] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:00.999 [2024-10-28 18:11:17.394624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.840 ms 00:21:00.999 [2024-10-28 18:11:17.394634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.423759] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.423823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:00.999 [2024-10-28 18:11:17.423852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.079 ms 00:21:00.999 [2024-10-28 18:11:17.423865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.438819] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.438898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:00.999 [2024-10-28 18:11:17.438929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.891 ms 00:21:00.999 [2024-10-28 18:11:17.438941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.453239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.453293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:21:00.999 [2024-10-28 18:11:17.453323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.254 ms 00:21:00.999 [2024-10-28 18:11:17.453333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:00.999 [2024-10-28 18:11:17.454270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:00.999 [2024-10-28 18:11:17.454338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:00.999 [2024-10-28 18:11:17.454369] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.802 ms 00:21:00.999 [2024-10-28 18:11:17.454404] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.259 [2024-10-28 18:11:17.529221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.259 [2024-10-28 18:11:17.529312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:01.259 [2024-10-28 18:11:17.529360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.783 ms 00:21:01.259 [2024-10-28 18:11:17.529372] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.259 [2024-10-28 18:11:17.541539] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:01.259 [2024-10-28 18:11:17.544118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.259 [2024-10-28 18:11:17.544166] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:01.259 [2024-10-28 18:11:17.544197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.674 ms 00:21:01.259 [2024-10-28 18:11:17.544208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.259 [2024-10-28 18:11:17.544317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.259 [2024-10-28 18:11:17.544337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:01.259 [2024-10-28 18:11:17.544360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:01.259 [2024-10-28 18:11:17.544375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.259 [2024-10-28 18:11:17.544484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.259 [2024-10-28 18:11:17.544505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:01.259 [2024-10-28 18:11:17.544518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:21:01.259 [2024-10-28 18:11:17.544528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.259 [2024-10-28 18:11:17.544560] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.259 [2024-10-28 18:11:17.544575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:01.259 [2024-10-28 18:11:17.544587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:01.259 [2024-10-28 18:11:17.544598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.259 [2024-10-28 18:11:17.544638] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:01.259 [2024-10-28 18:11:17.544658] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.259 [2024-10-28 18:11:17.544669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:01.259 [2024-10-28 18:11:17.544680] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:21:01.259 [2024-10-28 18:11:17.544691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.259 [2024-10-28 18:11:17.574888] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.259 [2024-10-28 18:11:17.574943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:01.259 [2024-10-28 18:11:17.574976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.173 ms 00:21:01.259 [2024-10-28 18:11:17.574992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.259 [2024-10-28 18:11:17.575078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:01.259 [2024-10-28 18:11:17.575097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:01.259 [2024-10-28 18:11:17.575110] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:21:01.259 [2024-10-28 18:11:17.575120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:01.259 [2024-10-28 18:11:17.576353] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 327.135 ms, result 0
00:21:02.635 [2024-10-28T18:11:20.049Z] Copying: 24/1024 [MB] (24 MBps)
[2024-10-28T18:11:20.985Z] Copying: 49/1024 [MB] (24 MBps)
[2024-10-28T18:11:21.920Z] Copying: 75/1024 [MB] (25 MBps)
[2024-10-28T18:11:22.866Z] Copying: 100/1024 [MB] (25 MBps)
[2024-10-28T18:11:23.801Z] Copying: 126/1024 [MB] (25 MBps)
[2024-10-28T18:11:25.176Z] Copying: 152/1024 [MB] (25 MBps)
[2024-10-28T18:11:26.110Z] Copying: 177/1024 [MB] (24 MBps)
[2024-10-28T18:11:27.043Z] Copying: 203/1024 [MB] (26 MBps)
[2024-10-28T18:11:27.986Z] Copying: 229/1024 [MB] (26 MBps)
[2024-10-28T18:11:28.946Z] Copying: 254/1024 [MB] (25 MBps)
[2024-10-28T18:11:29.881Z] Copying: 279/1024 [MB] (25 MBps)
[2024-10-28T18:11:30.814Z] Copying: 305/1024 [MB] (25 MBps)
[2024-10-28T18:11:32.188Z] Copying: 331/1024 [MB] (25 MBps)
[2024-10-28T18:11:33.123Z] Copying: 355/1024 [MB] (24 MBps)
[2024-10-28T18:11:34.068Z] Copying: 380/1024 [MB] (24 MBps)
[2024-10-28T18:11:35.002Z] Copying: 406/1024 [MB] (26 MBps)
[2024-10-28T18:11:35.940Z] Copying: 433/1024 [MB] (26 MBps)
[2024-10-28T18:11:36.876Z] Copying: 459/1024 [MB] (26 MBps)
[2024-10-28T18:11:37.812Z] Copying: 485/1024 [MB] (25 MBps)
[2024-10-28T18:11:39.191Z] Copying: 510/1024 [MB] (25 MBps)
[2024-10-28T18:11:40.127Z] Copying: 536/1024 [MB] (25 MBps)
[2024-10-28T18:11:41.063Z] Copying: 562/1024 [MB] (26 MBps)
[2024-10-28T18:11:41.997Z] Copying: 589/1024 [MB] (26 MBps)
[2024-10-28T18:11:42.928Z] Copying: 616/1024 [MB] (26 MBps)
[2024-10-28T18:11:43.863Z] Copying: 642/1024 [MB] (26 MBps)
[2024-10-28T18:11:44.799Z] Copying: 668/1024 [MB] (25 MBps)
[2024-10-28T18:11:46.171Z] Copying: 694/1024 [MB] (26 MBps)
[2024-10-28T18:11:47.105Z] Copying: 721/1024 [MB] (26 MBps)
[2024-10-28T18:11:48.040Z] Copying: 747/1024 [MB] (26 MBps)
[2024-10-28T18:11:48.988Z] Copying: 774/1024 [MB] (26 MBps)
[2024-10-28T18:11:49.921Z] Copying: 800/1024 [MB] (25 MBps)
[2024-10-28T18:11:50.854Z] Copying: 826/1024 [MB] (25 MBps)
[2024-10-28T18:11:52.225Z] Copying: 851/1024 [MB] (25 MBps)
[2024-10-28T18:11:53.158Z] Copying: 877/1024 [MB] (25 MBps)
[2024-10-28T18:11:54.092Z] Copying: 902/1024 [MB] (25 MBps)
[2024-10-28T18:11:55.025Z] Copying: 927/1024 [MB] (25 MBps)
[2024-10-28T18:11:55.959Z] Copying: 953/1024 [MB] (25 MBps)
[2024-10-28T18:11:56.893Z] Copying: 979/1024 [MB] (26 MBps)
[2024-10-28T18:11:57.831Z] Copying: 1005/1024 [MB] (26 MBps)
[2024-10-28T18:11:58.396Z] Copying: 1024/1024 [MB] (average 25 MBps)
[2024-10-28 18:11:58.379182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.918 [2024-10-28 18:11:58.379263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:41.918 [2024-10-28 18:11:58.379285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:41.918 [2024-10-28 18:11:58.379296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.918 [2024-10-28 18:11:58.379333] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:41.918 [2024-10-28 18:11:58.383449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.918 [2024-10-28 18:11:58.383506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:41.918 [2024-10-28 18:11:58.383551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.092 ms 00:21:41.918 [2024-10-28 18:11:58.383562] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.918 [2024-10-28 18:11:58.383873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.918 [2024-10-28 18:11:58.383906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:41.918 [2024-10-28 18:11:58.383920] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:21:41.918 [2024-10-28 18:11:58.383943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.918 [2024-10-28 18:11:58.388097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.918 [2024-10-28 18:11:58.388152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:41.918 [2024-10-28 18:11:58.388166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.134 ms 00:21:41.918 [2024-10-28 18:11:58.388177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:41.918 [2024-10-28 18:11:58.394693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:41.918 [2024-10-28 18:11:58.394747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:41.918 [2024-10-28 18:11:58.394778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.475 ms 00:21:41.918 [2024-10-28 18:11:58.394788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.178 [2024-10-28 18:11:58.425015] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.178 [2024-10-28 18:11:58.425104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:42.178 [2024-10-28 18:11:58.425138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.119 ms 00:21:42.178 [2024-10-28 18:11:58.425150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.178 [2024-10-28 18:11:58.443287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.178 [2024-10-28 18:11:58.443361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:42.178 [2024-10-28 18:11:58.443395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.051 ms 00:21:42.178 [2024-10-28 18:11:58.443428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.178 [2024-10-28 18:11:58.443623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.178 [2024-10-28 18:11:58.443661]
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:42.178 [2024-10-28 18:11:58.443674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.141 ms 00:21:42.178 [2024-10-28 18:11:58.443686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.178 [2024-10-28 18:11:58.472930] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.178 [2024-10-28 18:11:58.473042] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:42.178 [2024-10-28 18:11:58.473076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.219 ms 00:21:42.178 [2024-10-28 18:11:58.473087] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.178 [2024-10-28 18:11:58.502089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.178 [2024-10-28 18:11:58.502231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:42.178 [2024-10-28 18:11:58.502266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.901 ms 00:21:42.178 [2024-10-28 18:11:58.502277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.178 [2024-10-28 18:11:58.530697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.178 [2024-10-28 18:11:58.530786] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:42.178 [2024-10-28 18:11:58.530820] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.328 ms 00:21:42.178 [2024-10-28 18:11:58.530831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.178 [2024-10-28 18:11:58.559343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.178 [2024-10-28 18:11:58.559437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:42.178 [2024-10-28 18:11:58.559472] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.339 ms 00:21:42.178 [2024-10-28 18:11:58.559482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.178 [2024-10-28 18:11:58.559576] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:42.178 [2024-10-28 18:11:58.559601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559677] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559745] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559799] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:42.178 [2024-10-28 18:11:58.559974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.559984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.559995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 
[2024-10-28 18:11:58.560028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560200] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560220] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 
state: free 00:21:42.179 [2024-10-28 18:11:58.560326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 
0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560711] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:42.179 [2024-10-28 18:11:58.560762] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:42.179 [2024-10-28 18:11:58.560787] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e26672bb-562c-40ba-bbba-bd2e0247fc2e 00:21:42.179 [2024-10-28 18:11:58.560798] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:42.179 [2024-10-28 18:11:58.560808] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:42.179 [2024-10-28 18:11:58.560818] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:42.179 [2024-10-28 18:11:58.560829] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:42.179 [2024-10-28 18:11:58.560853] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:42.179 [2024-10-28 18:11:58.560865] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:42.179 [2024-10-28 18:11:58.560895] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:42.179 [2024-10-28 18:11:58.560905] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:42.179 [2024-10-28 18:11:58.560916] ftl_debug.c: 220:ftl_dev_dump_stats: 
*NOTICE*: [FTL][ftl0] start: 0 00:21:42.179 [2024-10-28 18:11:58.560927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.179 [2024-10-28 18:11:58.560937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:42.179 [2024-10-28 18:11:58.560949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.352 ms 00:21:42.179 [2024-10-28 18:11:58.560959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.179 [2024-10-28 18:11:58.576479] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.179 [2024-10-28 18:11:58.576555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:42.179 [2024-10-28 18:11:58.576588] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.441 ms 00:21:42.179 [2024-10-28 18:11:58.576599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.179 [2024-10-28 18:11:58.577092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:42.179 [2024-10-28 18:11:58.577121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:42.179 [2024-10-28 18:11:58.577135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.446 ms 00:21:42.179 [2024-10-28 18:11:58.577165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.179 [2024-10-28 18:11:58.617758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.179 [2024-10-28 18:11:58.617819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:42.179 [2024-10-28 18:11:58.617861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.179 [2024-10-28 18:11:58.617875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.179 [2024-10-28 18:11:58.617951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.179 [2024-10-28 18:11:58.617989] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:42.180 [2024-10-28 18:11:58.618001] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.180 [2024-10-28 18:11:58.618024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.180 [2024-10-28 18:11:58.618188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.180 [2024-10-28 18:11:58.618210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:42.180 [2024-10-28 18:11:58.618224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.180 [2024-10-28 18:11:58.618234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.180 [2024-10-28 18:11:58.618258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.180 [2024-10-28 18:11:58.618272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:42.180 [2024-10-28 18:11:58.618283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.180 [2024-10-28 18:11:58.618293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.438 [2024-10-28 18:11:58.720445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.438 [2024-10-28 18:11:58.720528] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:42.438 [2024-10-28 18:11:58.720561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 
00:21:42.438 [2024-10-28 18:11:58.720572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.438 [2024-10-28 18:11:58.798385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.438 [2024-10-28 18:11:58.798464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:42.438 [2024-10-28 18:11:58.798507] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.438 [2024-10-28 18:11:58.798518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.438 [2024-10-28 18:11:58.798647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.438 [2024-10-28 18:11:58.798665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:42.438 [2024-10-28 18:11:58.798677] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.438 [2024-10-28 18:11:58.798694] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.438 [2024-10-28 18:11:58.798756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.438 [2024-10-28 18:11:58.798771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:42.438 [2024-10-28 18:11:58.798783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.438 [2024-10-28 18:11:58.798794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.438 [2024-10-28 18:11:58.798943] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.438 [2024-10-28 18:11:58.798964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:42.438 [2024-10-28 18:11:58.798975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.438 [2024-10-28 18:11:58.798985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.438 [2024-10-28 18:11:58.799038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.438 [2024-10-28 18:11:58.799057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:42.438 [2024-10-28 18:11:58.799068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.438 [2024-10-28 18:11:58.799079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.438 [2024-10-28 18:11:58.799142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.438 [2024-10-28 18:11:58.799170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:42.438 [2024-10-28 18:11:58.799183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.438 [2024-10-28 18:11:58.799193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.438 [2024-10-28 18:11:58.799261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:42.438 [2024-10-28 18:11:58.799284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:42.438 [2024-10-28 18:11:58.799296] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:42.438 [2024-10-28 18:11:58.799307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:42.438 [2024-10-28 18:11:58.799493] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 420.276 ms, result 0 00:21:43.381 00:21:43.381 00:21:43.381 18:11:59 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c 
/home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:21:45.910 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:21:45.910 18:12:01 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:21:45.910 [2024-10-28 18:12:01.984560] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:21:45.910 [2024-10-28 18:12:01.984755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77263 ] 00:21:45.910 [2024-10-28 18:12:02.156386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:45.910 [2024-10-28 18:12:02.257663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:46.167 [2024-10-28 18:12:02.570795] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:46.167 [2024-10-28 18:12:02.570916] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:46.427 [2024-10-28 18:12:02.732600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.427 [2024-10-28 18:12:02.732675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:46.427 [2024-10-28 18:12:02.732719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:21:46.427 [2024-10-28 18:12:02.732730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.427 [2024-10-28 18:12:02.732796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.427 [2024-10-28 18:12:02.732814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:46.427 [2024-10-28 18:12:02.732830] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:46.427 [2024-10-28 18:12:02.732841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.427 [2024-10-28 18:12:02.732895] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:46.427 [2024-10-28 18:12:02.733819] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:46.427 [2024-10-28 18:12:02.733881] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.427 [2024-10-28 18:12:02.733895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:46.427 [2024-10-28 18:12:02.733908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:21:46.427 [2024-10-28 18:12:02.733920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.427 [2024-10-28 18:12:02.735221] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:46.427 [2024-10-28 18:12:02.751122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.427 [2024-10-28 18:12:02.751180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:46.427 [2024-10-28 18:12:02.751213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.903 ms 00:21:46.427 [2024-10-28 18:12:02.751224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.428 [2024-10-28 18:12:02.751302] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Action 00:21:46.428 [2024-10-28 18:12:02.751321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:46.428 [2024-10-28 18:12:02.751333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:21:46.428 [2024-10-28 18:12:02.751343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.428 [2024-10-28 18:12:02.755976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.428 [2024-10-28 18:12:02.756038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:46.428 [2024-10-28 18:12:02.756069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.509 ms 00:21:46.428 [2024-10-28 18:12:02.756080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.428 [2024-10-28 18:12:02.756219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.428 [2024-10-28 18:12:02.756241] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:46.428 [2024-10-28 18:12:02.756254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:21:46.428 [2024-10-28 18:12:02.756264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.428 [2024-10-28 18:12:02.756361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.428 [2024-10-28 18:12:02.756379] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:46.428 [2024-10-28 18:12:02.756392] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:46.428 [2024-10-28 18:12:02.756403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.428 [2024-10-28 18:12:02.756437] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:46.428 [2024-10-28 18:12:02.760605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.428 [2024-10-28 18:12:02.760658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:46.428 [2024-10-28 18:12:02.760689] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.177 ms 00:21:46.428 [2024-10-28 18:12:02.760704] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.428 [2024-10-28 18:12:02.760741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.428 [2024-10-28 18:12:02.760756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:46.428 [2024-10-28 18:12:02.760768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:46.428 [2024-10-28 18:12:02.760777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.428 [2024-10-28 18:12:02.760823] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:46.428 [2024-10-28 18:12:02.760882] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:46.428 [2024-10-28 18:12:02.760944] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:46.428 [2024-10-28 18:12:02.760968] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:46.428 [2024-10-28 18:12:02.761082] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:46.428 [2024-10-28 
18:12:02.761097] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:46.428 [2024-10-28 18:12:02.761111] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:46.428 [2024-10-28 18:12:02.761127] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:46.428 [2024-10-28 18:12:02.761140] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:46.428 [2024-10-28 18:12:02.761152] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:46.428 [2024-10-28 18:12:02.761163] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:46.428 [2024-10-28 18:12:02.761173] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:46.428 [2024-10-28 18:12:02.761184] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:46.428 [2024-10-28 18:12:02.761200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.428 [2024-10-28 18:12:02.761212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:46.428 [2024-10-28 18:12:02.761223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.381 ms 00:21:46.428 [2024-10-28 18:12:02.761234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.428 [2024-10-28 18:12:02.761329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.428 [2024-10-28 18:12:02.761344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:46.428 [2024-10-28 18:12:02.761356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:21:46.428 [2024-10-28 18:12:02.761366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.428 [2024-10-28 18:12:02.761513] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:46.428 [2024-10-28 18:12:02.761541] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:46.428 [2024-10-28 18:12:02.761555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:46.428 [2024-10-28 18:12:02.761566] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:46.428 [2024-10-28 18:12:02.761588] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:46.428 [2024-10-28 18:12:02.761609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:46.428 [2024-10-28 18:12:02.761619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:46.428 [2024-10-28 18:12:02.761640] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:46.428 [2024-10-28 18:12:02.761650] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:46.428 [2024-10-28 18:12:02.761660] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:46.428 [2024-10-28 18:12:02.761670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:46.428 [2024-10-28 18:12:02.761680] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:46.428 [2024-10-28 18:12:02.761703] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761715] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:46.428 [2024-10-28 18:12:02.761726] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:46.428 [2024-10-28 18:12:02.761736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:46.428 [2024-10-28 18:12:02.761756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761766] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.428 [2024-10-28 18:12:02.761776] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:46.428 [2024-10-28 18:12:02.761786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.428 [2024-10-28 18:12:02.761807] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:46.428 [2024-10-28 18:12:02.761817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.428 [2024-10-28 18:12:02.761837] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:46.428 [2024-10-28 18:12:02.761847] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761876] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:46.428 [2024-10-28 18:12:02.761887] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:46.428 [2024-10-28 18:12:02.761904] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:46.428 [2024-10-28 18:12:02.761914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:46.428 [2024-10-28 18:12:02.761924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:46.428 [2024-10-28 18:12:02.761934] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:46.428 [2024-10-28 18:12:02.761943] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:46.429 [2024-10-28 18:12:02.761953] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:46.429 [2024-10-28 18:12:02.761964] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:46.429 [2024-10-28 18:12:02.761973] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.429 [2024-10-28 18:12:02.761984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:46.429 [2024-10-28 18:12:02.761994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:46.429 [2024-10-28 18:12:02.762004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.429 [2024-10-28 18:12:02.762014] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:46.429 [2024-10-28 18:12:02.762026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:46.429 [2024-10-28 18:12:02.762036] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 
MiB 00:21:46.429 [2024-10-28 18:12:02.762047] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:46.429 [2024-10-28 18:12:02.762058] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:46.429 [2024-10-28 18:12:02.762070] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:46.429 [2024-10-28 18:12:02.762080] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:46.429 [2024-10-28 18:12:02.762090] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:46.429 [2024-10-28 18:12:02.762100] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:46.429 [2024-10-28 18:12:02.762110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:46.429 [2024-10-28 18:12:02.762122] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:46.429 [2024-10-28 18:12:02.762136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:46.429 [2024-10-28 18:12:02.762148] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:46.429 [2024-10-28 18:12:02.762170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:46.429 [2024-10-28 18:12:02.762183] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:46.429 [2024-10-28 18:12:02.762194] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:46.429 [2024-10-28 18:12:02.762205] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:46.429 [2024-10-28 18:12:02.762216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:46.429 [2024-10-28 18:12:02.762227] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:46.429 [2024-10-28 18:12:02.762239] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:46.429 [2024-10-28 18:12:02.762250] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:46.429 [2024-10-28 18:12:02.762261] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:46.429 [2024-10-28 18:12:02.762272] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:46.429 [2024-10-28 18:12:02.762283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:46.429 [2024-10-28 18:12:02.762294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:46.429 [2024-10-28 18:12:02.762305] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
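
The "SB metadata layout" rows above (and the base-dev rows that follow) describe each region as type / version / block offset / block size, all counted in FTL blocks. A small sketch to tabulate them, assuming a 4 KiB FTL block; that size is an inference rather than something read from the log, though it is the value that makes the dump self-consistent (type 0x2 blk_sz 0x5000 = 20480 blocks matches the 80.00 MiB reported for the l2p region, and type 0x9 blk_sz 0x1900000 matches data_btm's 102400.00 MiB).

import re
import sys

FTL_BLOCK = 4096  # assumed 4 KiB block; see the cross-checks noted above

text = sys.stdin.read()
rows = re.findall(
    r"Region type:(0x[0-9a-f]+) ver:(\d+) "
    r"blk_offs:(0x[0-9a-f]+) blk_sz:(0x[0-9a-f]+)",
    text)
for rtype, ver, offs, size in rows:
    blocks = int(size, 16)
    print(f"type {rtype:>10}  v{ver}  offs_blk {int(offs, 16):>9}  "
          f"{blocks * FTL_BLOCK / (1 << 20):10.2f} MiB")
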
00:21:46.429 [2024-10-28 18:12:02.762317] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:46.429 [2024-10-28 18:12:02.762335] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:46.429 [2024-10-28 18:12:02.762347] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:46.429 [2024-10-28 18:12:02.762359] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:46.429 [2024-10-28 18:12:02.762370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:46.429 [2024-10-28 18:12:02.762382] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:46.429 [2024-10-28 18:12:02.762394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.429 [2024-10-28 18:12:02.762405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:46.429 [2024-10-28 18:12:02.762416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.951 ms 00:21:46.429 [2024-10-28 18:12:02.762427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.429 [2024-10-28 18:12:02.796439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.429 [2024-10-28 18:12:02.796556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:46.429 [2024-10-28 18:12:02.796609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.950 ms 00:21:46.429 [2024-10-28 18:12:02.796621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.429 [2024-10-28 18:12:02.796745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.429 [2024-10-28 18:12:02.796761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:46.429 [2024-10-28 18:12:02.796774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:21:46.429 [2024-10-28 18:12:02.796785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.429 [2024-10-28 18:12:02.858399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.429 [2024-10-28 18:12:02.858488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:46.429 [2024-10-28 18:12:02.858535] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.504 ms 00:21:46.429 [2024-10-28 18:12:02.858553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.429 [2024-10-28 18:12:02.858664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.429 [2024-10-28 18:12:02.858689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:46.429 [2024-10-28 18:12:02.858714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:46.429 [2024-10-28 18:12:02.858772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.429 [2024-10-28 18:12:02.859299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.429 [2024-10-28 18:12:02.859361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:46.429 [2024-10-28 
18:12:02.859387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.370 ms 00:21:46.429 [2024-10-28 18:12:02.859407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.429 [2024-10-28 18:12:02.859637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.429 [2024-10-28 18:12:02.859696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:46.429 [2024-10-28 18:12:02.859721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:21:46.429 [2024-10-28 18:12:02.859750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.429 [2024-10-28 18:12:02.885279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.429 [2024-10-28 18:12:02.885368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:46.429 [2024-10-28 18:12:02.885419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.484 ms 00:21:46.429 [2024-10-28 18:12:02.885437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:02.910577] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:46.688 [2024-10-28 18:12:02.910644] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:46.688 [2024-10-28 18:12:02.910693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:02.910716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:46.688 [2024-10-28 18:12:02.910739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.045 ms 00:21:46.688 [2024-10-28 18:12:02.910761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:02.950620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:02.950730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:46.688 [2024-10-28 18:12:02.950775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.711 ms 00:21:46.688 [2024-10-28 18:12:02.950794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:02.969028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:02.969094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:46.688 [2024-10-28 18:12:02.969113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.137 ms 00:21:46.688 [2024-10-28 18:12:02.969125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:02.984980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:02.985026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:46.688 [2024-10-28 18:12:02.985043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.805 ms 00:21:46.688 [2024-10-28 18:12:02.985054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:02.985875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:02.985908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:46.688 [2024-10-28 18:12:02.985923] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.695 ms 00:21:46.688 [2024-10-28 18:12:02.985939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:03.059729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:03.059832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:46.688 [2024-10-28 18:12:03.059875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 73.764 ms 00:21:46.688 [2024-10-28 18:12:03.059887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:03.072795] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:46.688 [2024-10-28 18:12:03.075583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:03.075636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:46.688 [2024-10-28 18:12:03.075654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.618 ms 00:21:46.688 [2024-10-28 18:12:03.075666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:03.075788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:03.075808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:46.688 [2024-10-28 18:12:03.075821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:46.688 [2024-10-28 18:12:03.075852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:03.075992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:03.076031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:46.688 [2024-10-28 18:12:03.076046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:21:46.688 [2024-10-28 18:12:03.076057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:03.076097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:03.076113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:46.688 [2024-10-28 18:12:03.076125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:21:46.688 [2024-10-28 18:12:03.076136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:03.076179] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:46.688 [2024-10-28 18:12:03.076199] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:03.076211] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:46.688 [2024-10-28 18:12:03.076222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:21:46.688 [2024-10-28 18:12:03.076234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:03.107763] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:03.107850] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:46.688 [2024-10-28 18:12:03.107871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.504 ms 00:21:46.688 [2024-10-28 18:12:03.107892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:21:46.688 [2024-10-28 18:12:03.107988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:46.688 [2024-10-28 18:12:03.108008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:46.688 [2024-10-28 18:12:03.108020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:21:46.688 [2024-10-28 18:12:03.108031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:46.688 [2024-10-28 18:12:03.109319] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 376.173 ms, result 0 00:21:48.062  [2024-10-28T18:12:05.473Z] Copying: 25/1024 [MB] (25 MBps) [2024-10-28T18:12:06.409Z] Copying: 50/1024 [MB] (25 MBps) [2024-10-28T18:12:07.382Z] Copying: 76/1024 [MB] (25 MBps) [2024-10-28T18:12:08.316Z] Copying: 102/1024 [MB] (25 MBps) [2024-10-28T18:12:09.250Z] Copying: 127/1024 [MB] (25 MBps) [2024-10-28T18:12:10.182Z] Copying: 153/1024 [MB] (25 MBps) [2024-10-28T18:12:11.558Z] Copying: 180/1024 [MB] (26 MBps) [2024-10-28T18:12:12.123Z] Copying: 205/1024 [MB] (25 MBps) [2024-10-28T18:12:13.518Z] Copying: 229/1024 [MB] (24 MBps) [2024-10-28T18:12:14.450Z] Copying: 254/1024 [MB] (24 MBps) [2024-10-28T18:12:15.382Z] Copying: 279/1024 [MB] (24 MBps) [2024-10-28T18:12:16.314Z] Copying: 303/1024 [MB] (24 MBps) [2024-10-28T18:12:17.248Z] Copying: 328/1024 [MB] (24 MBps) [2024-10-28T18:12:18.181Z] Copying: 353/1024 [MB] (24 MBps) [2024-10-28T18:12:19.561Z] Copying: 378/1024 [MB] (25 MBps) [2024-10-28T18:12:20.144Z] Copying: 402/1024 [MB] (24 MBps) [2024-10-28T18:12:21.517Z] Copying: 427/1024 [MB] (24 MBps) [2024-10-28T18:12:22.451Z] Copying: 452/1024 [MB] (24 MBps) [2024-10-28T18:12:23.409Z] Copying: 477/1024 [MB] (25 MBps) [2024-10-28T18:12:24.342Z] Copying: 502/1024 [MB] (24 MBps) [2024-10-28T18:12:25.286Z] Copying: 527/1024 [MB] (25 MBps) [2024-10-28T18:12:26.219Z] Copying: 551/1024 [MB] (24 MBps) [2024-10-28T18:12:27.153Z] Copying: 576/1024 [MB] (24 MBps) [2024-10-28T18:12:28.525Z] Copying: 600/1024 [MB] (24 MBps) [2024-10-28T18:12:29.460Z] Copying: 626/1024 [MB] (26 MBps) [2024-10-28T18:12:30.398Z] Copying: 651/1024 [MB] (25 MBps) [2024-10-28T18:12:31.331Z] Copying: 678/1024 [MB] (26 MBps) [2024-10-28T18:12:32.265Z] Copying: 703/1024 [MB] (24 MBps) [2024-10-28T18:12:33.199Z] Copying: 728/1024 [MB] (25 MBps) [2024-10-28T18:12:34.148Z] Copying: 752/1024 [MB] (24 MBps) [2024-10-28T18:12:35.522Z] Copying: 777/1024 [MB] (24 MBps) [2024-10-28T18:12:36.458Z] Copying: 803/1024 [MB] (25 MBps) [2024-10-28T18:12:37.393Z] Copying: 829/1024 [MB] (26 MBps) [2024-10-28T18:12:38.380Z] Copying: 855/1024 [MB] (25 MBps) [2024-10-28T18:12:39.313Z] Copying: 882/1024 [MB] (26 MBps) [2024-10-28T18:12:40.353Z] Copying: 908/1024 [MB] (26 MBps) [2024-10-28T18:12:41.288Z] Copying: 935/1024 [MB] (27 MBps) [2024-10-28T18:12:42.220Z] Copying: 963/1024 [MB] (27 MBps) [2024-10-28T18:12:43.152Z] Copying: 989/1024 [MB] (26 MBps) [2024-10-28T18:12:44.526Z] Copying: 1015/1024 [MB] (26 MBps) [2024-10-28T18:12:44.784Z] Copying: 1048200/1048576 [kB] (7916 kBps) [2024-10-28T18:12:44.784Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-10-28 18:12:44.550942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-10-28 18:12:44.551033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:22:28.306 [2024-10-28 18:12:44.551056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:22:28.306 [2024-10-28 
18:12:44.551085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-10-28 18:12:44.553321] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:22:28.306 [2024-10-28 18:12:44.557805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-10-28 18:12:44.557873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:22:28.306 [2024-10-28 18:12:44.557891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.438 ms 00:22:28.306 [2024-10-28 18:12:44.557904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-10-28 18:12:44.571245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-10-28 18:12:44.571310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:22:28.306 [2024-10-28 18:12:44.571330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.073 ms 00:22:28.306 [2024-10-28 18:12:44.571341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-10-28 18:12:44.592562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-10-28 18:12:44.592613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:22:28.306 [2024-10-28 18:12:44.592631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.189 ms 00:22:28.306 [2024-10-28 18:12:44.592643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-10-28 18:12:44.599360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-10-28 18:12:44.599413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:22:28.306 [2024-10-28 18:12:44.599429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.678 ms 00:22:28.306 [2024-10-28 18:12:44.599440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-10-28 18:12:44.630889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-10-28 18:12:44.630950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:22:28.306 [2024-10-28 18:12:44.630967] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.347 ms 00:22:28.306 [2024-10-28 18:12:44.630979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-10-28 18:12:44.648664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-10-28 18:12:44.648731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:22:28.306 [2024-10-28 18:12:44.648748] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.638 ms 00:22:28.306 [2024-10-28 18:12:44.648760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-10-28 18:12:44.747723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-10-28 18:12:44.747801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:22:28.306 [2024-10-28 18:12:44.747821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 98.913 ms 00:22:28.306 [2024-10-28 18:12:44.747848] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.306 [2024-10-28 18:12:44.780180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.306 [2024-10-28 18:12:44.780243] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:22:28.306 [2024-10-28 18:12:44.780260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.305 ms 00:22:28.306 [2024-10-28 18:12:44.780271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.564 [2024-10-28 18:12:44.811694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.564 [2024-10-28 18:12:44.811767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:22:28.564 [2024-10-28 18:12:44.811801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.375 ms 00:22:28.564 [2024-10-28 18:12:44.811812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.565 [2024-10-28 18:12:44.841901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.565 [2024-10-28 18:12:44.841959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:22:28.565 [2024-10-28 18:12:44.841976] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.032 ms 00:22:28.565 [2024-10-28 18:12:44.841987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.565 [2024-10-28 18:12:44.872369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.565 [2024-10-28 18:12:44.872412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:22:28.565 [2024-10-28 18:12:44.872429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.291 ms 00:22:28.565 [2024-10-28 18:12:44.872440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.565 [2024-10-28 18:12:44.872484] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:22:28.565 [2024-10-28 18:12:44.872509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 118528 / 261120 wr_cnt: 1 state: open 00:22:28.565 [2024-10-28 18:12:44.872523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872569] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872626] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872649] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872819] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872892] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 
18:12:44.872949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.872994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873198] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 
00:22:28.565 [2024-10-28 18:12:44.873233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873324] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:22:28.565 [2024-10-28 18:12:44.873503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 
wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:22:28.566 [2024-10-28 18:12:44.873672] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:22:28.566 [2024-10-28 18:12:44.873683] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e26672bb-562c-40ba-bbba-bd2e0247fc2e 00:22:28.566 [2024-10-28 18:12:44.873695] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 118528 00:22:28.566 [2024-10-28 18:12:44.873706] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 119488 00:22:28.566 [2024-10-28 18:12:44.873716] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 118528 00:22:28.566 [2024-10-28 18:12:44.873728] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0081 00:22:28.566 [2024-10-28 18:12:44.873738] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:22:28.566 [2024-10-28 18:12:44.873757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:22:28.566 [2024-10-28 18:12:44.873780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:22:28.566 [2024-10-28 18:12:44.873791] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:22:28.566 [2024-10-28 18:12:44.873800] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:22:28.566 [2024-10-28 18:12:44.873811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-10-28 18:12:44.873822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:22:28.566 [2024-10-28 18:12:44.873845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.329 ms 00:22:28.566 [2024-10-28 18:12:44.873859] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-10-28 18:12:44.890394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-10-28 18:12:44.890434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:22:28.566 [2024-10-28 18:12:44.890450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.493 ms 00:22:28.566 [2024-10-28 18:12:44.890469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-10-28 18:12:44.890924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:28.566 [2024-10-28 18:12:44.890949] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:22:28.566 [2024-10-28 18:12:44.890962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.428 ms 00:22:28.566 [2024-10-28 18:12:44.890973] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-10-28 18:12:44.935061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.566 [2024-10-28 18:12:44.935110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:28.566 [2024-10-28 18:12:44.935133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.566 [2024-10-28 18:12:44.935146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-10-28 18:12:44.935216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.566 [2024-10-28 18:12:44.935231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:28.566 [2024-10-28 18:12:44.935243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.566 [2024-10-28 18:12:44.935253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-10-28 18:12:44.935361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.566 [2024-10-28 18:12:44.935380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:28.566 [2024-10-28 18:12:44.935393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.566 [2024-10-28 18:12:44.935411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-10-28 18:12:44.935433] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.566 [2024-10-28 18:12:44.935446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:28.566 [2024-10-28 18:12:44.935458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.566 [2024-10-28 18:12:44.935468] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.566 [2024-10-28 18:12:45.039912] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.566 [2024-10-28 18:12:45.039988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:28.566 [2024-10-28 18:12:45.040014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.566 [2024-10-28 18:12:45.040026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.824 [2024-10-28 18:12:45.123316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.824 [2024-10-28 18:12:45.123388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:28.824 [2024-10-28 18:12:45.123422] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:22:28.824 [2024-10-28 18:12:45.123434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.824 [2024-10-28 18:12:45.123540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.824 [2024-10-28 18:12:45.123558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:28.824 [2024-10-28 18:12:45.123571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.824 [2024-10-28 18:12:45.123582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.824 [2024-10-28 18:12:45.123635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.824 [2024-10-28 18:12:45.123650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:28.824 [2024-10-28 18:12:45.123662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.824 [2024-10-28 18:12:45.123673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.824 [2024-10-28 18:12:45.123792] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.824 [2024-10-28 18:12:45.123811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:28.824 [2024-10-28 18:12:45.123823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.824 [2024-10-28 18:12:45.123834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.824 [2024-10-28 18:12:45.123920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.824 [2024-10-28 18:12:45.123944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:22:28.824 [2024-10-28 18:12:45.123957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.824 [2024-10-28 18:12:45.123969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.824 [2024-10-28 18:12:45.124012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.824 [2024-10-28 18:12:45.124027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:28.824 [2024-10-28 18:12:45.124038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.824 [2024-10-28 18:12:45.124050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.824 [2024-10-28 18:12:45.124107] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:22:28.824 [2024-10-28 18:12:45.124124] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:28.824 [2024-10-28 18:12:45.124136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:22:28.824 [2024-10-28 18:12:45.124146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:28.824 [2024-10-28 18:12:45.124286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 577.491 ms, result 0 00:22:30.198 00:22:30.198 00:22:30.198 18:12:46 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:22:30.198 [2024-10-28 18:12:46.625867] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
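
The statistics dumped during the preceding shutdown are enough to verify the logged WAF by hand: WAF (write amplification factor) is total media writes divided by user writes, 119488 / 118528, which rounds to the 1.0081 printed above; the 960-block difference is the FTL's own metadata and housekeeping writes. A worked check follows, with the counter values copied from the ftl_debug.c dump above; nothing here is an SPDK API.

# Counters from the "Dump statistics" output above.
total_writes = 119488   # "total writes"
user_writes  = 118528   # "user writes" (also Band 1's valid-LBA count)

waf = total_writes / user_writes
print(f"WAF = {total_writes} / {user_writes} = {waf:.4f}")   # -> 1.0081
print(f"housekeeping writes: {total_writes - user_writes}")  # -> 960
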
00:22:30.198 [2024-10-28 18:12:46.626161] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77706 ] 00:22:30.456 [2024-10-28 18:12:46.822378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:30.715 [2024-10-28 18:12:46.954443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:30.972 [2024-10-28 18:12:47.267968] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:30.972 [2024-10-28 18:12:47.268072] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:22:30.973 [2024-10-28 18:12:47.427729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.973 [2024-10-28 18:12:47.427791] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:22:30.973 [2024-10-28 18:12:47.427819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:22:30.973 [2024-10-28 18:12:47.427832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.973 [2024-10-28 18:12:47.427920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.973 [2024-10-28 18:12:47.427939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:22:30.973 [2024-10-28 18:12:47.427956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:22:30.973 [2024-10-28 18:12:47.427967] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.973 [2024-10-28 18:12:47.427998] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:22:30.973 [2024-10-28 18:12:47.428923] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:22:30.973 [2024-10-28 18:12:47.428964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.973 [2024-10-28 18:12:47.428977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:22:30.973 [2024-10-28 18:12:47.428990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.973 ms 00:22:30.973 [2024-10-28 18:12:47.429001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.973 [2024-10-28 18:12:47.430115] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:22:30.973 [2024-10-28 18:12:47.448733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.973 [2024-10-28 18:12:47.448795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:22:30.973 [2024-10-28 18:12:47.448828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.618 ms 00:22:30.973 [2024-10-28 18:12:47.448840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:30.973 [2024-10-28 18:12:47.448931] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:30.973 [2024-10-28 18:12:47.448951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:22:30.973 [2024-10-28 18:12:47.448964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.024 ms 00:22:30.973 [2024-10-28 18:12:47.448974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.232 [2024-10-28 18:12:47.453586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:22:31.232 [2024-10-28 18:12:47.453648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:22:31.232 [2024-10-28 18:12:47.453680] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.516 ms 00:22:31.232 [2024-10-28 18:12:47.453691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.232 [2024-10-28 18:12:47.453793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.232 [2024-10-28 18:12:47.453811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:22:31.232 [2024-10-28 18:12:47.453824] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:22:31.232 [2024-10-28 18:12:47.453834] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.232 [2024-10-28 18:12:47.453916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.232 [2024-10-28 18:12:47.453934] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:22:31.232 [2024-10-28 18:12:47.453947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:22:31.232 [2024-10-28 18:12:47.453958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.232 [2024-10-28 18:12:47.453991] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:22:31.232 [2024-10-28 18:12:47.458390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.232 [2024-10-28 18:12:47.458433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:22:31.232 [2024-10-28 18:12:47.458448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.407 ms 00:22:31.232 [2024-10-28 18:12:47.458464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.232 [2024-10-28 18:12:47.458504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.232 [2024-10-28 18:12:47.458519] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:22:31.232 [2024-10-28 18:12:47.458531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:22:31.232 [2024-10-28 18:12:47.458541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.232 [2024-10-28 18:12:47.458589] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:22:31.232 [2024-10-28 18:12:47.458619] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:22:31.232 [2024-10-28 18:12:47.458663] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:22:31.232 [2024-10-28 18:12:47.458686] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:22:31.232 [2024-10-28 18:12:47.458799] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:22:31.232 [2024-10-28 18:12:47.458815] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:22:31.232 [2024-10-28 18:12:47.458830] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:22:31.232 [2024-10-28 18:12:47.458908] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:22:31.232 [2024-10-28 18:12:47.458922] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:22:31.232 [2024-10-28 18:12:47.458934] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:22:31.232 [2024-10-28 18:12:47.458945] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:22:31.232 [2024-10-28 18:12:47.458955] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:22:31.232 [2024-10-28 18:12:47.458966] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:22:31.232 [2024-10-28 18:12:47.458984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.232 [2024-10-28 18:12:47.458994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:22:31.232 [2024-10-28 18:12:47.459006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:22:31.232 [2024-10-28 18:12:47.459019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.232 [2024-10-28 18:12:47.459126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.232 [2024-10-28 18:12:47.459143] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:22:31.232 [2024-10-28 18:12:47.459154] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:22:31.232 [2024-10-28 18:12:47.459164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.232 [2024-10-28 18:12:47.459341] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:22:31.232 [2024-10-28 18:12:47.459386] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:22:31.232 [2024-10-28 18:12:47.459408] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:31.232 [2024-10-28 18:12:47.459427] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.232 [2024-10-28 18:12:47.459445] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:22:31.233 [2024-10-28 18:12:47.459462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:22:31.233 [2024-10-28 18:12:47.459478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:22:31.233 [2024-10-28 18:12:47.459493] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:22:31.233 [2024-10-28 18:12:47.459508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:22:31.233 [2024-10-28 18:12:47.459524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:31.233 [2024-10-28 18:12:47.459539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:22:31.233 [2024-10-28 18:12:47.459555] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:22:31.233 [2024-10-28 18:12:47.459572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:22:31.233 [2024-10-28 18:12:47.459589] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:22:31.233 [2024-10-28 18:12:47.459606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:22:31.233 [2024-10-28 18:12:47.459638] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.233 [2024-10-28 18:12:47.459656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:22:31.233 [2024-10-28 18:12:47.459674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:22:31.233 [2024-10-28 18:12:47.459691] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.233 [2024-10-28 18:12:47.459708] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:22:31.233 [2024-10-28 18:12:47.459724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:22:31.233 [2024-10-28 18:12:47.459741] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:31.233 [2024-10-28 18:12:47.459759] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:22:31.233 [2024-10-28 18:12:47.459776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:22:31.233 [2024-10-28 18:12:47.459792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:31.233 [2024-10-28 18:12:47.459808] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:22:31.233 [2024-10-28 18:12:47.459823] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:22:31.233 [2024-10-28 18:12:47.459839] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:31.233 [2024-10-28 18:12:47.459879] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:22:31.233 [2024-10-28 18:12:47.459900] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:22:31.233 [2024-10-28 18:12:47.459919] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:22:31.233 [2024-10-28 18:12:47.459936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:22:31.233 [2024-10-28 18:12:47.459954] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:22:31.233 [2024-10-28 18:12:47.459972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:31.233 [2024-10-28 18:12:47.459990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:22:31.233 [2024-10-28 18:12:47.460006] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:22:31.233 [2024-10-28 18:12:47.460023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:22:31.233 [2024-10-28 18:12:47.460041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:22:31.233 [2024-10-28 18:12:47.460060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:22:31.233 [2024-10-28 18:12:47.460075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.233 [2024-10-28 18:12:47.460089] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:22:31.233 [2024-10-28 18:12:47.460104] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:22:31.233 [2024-10-28 18:12:47.460118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.233 [2024-10-28 18:12:47.460133] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:22:31.233 [2024-10-28 18:12:47.460149] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:22:31.233 [2024-10-28 18:12:47.460166] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:22:31.233 [2024-10-28 18:12:47.460182] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:22:31.233 [2024-10-28 18:12:47.460199] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:22:31.233 [2024-10-28 18:12:47.460216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:22:31.233 [2024-10-28 18:12:47.460233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:22:31.233 
[2024-10-28 18:12:47.460251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:22:31.233 [2024-10-28 18:12:47.460269] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:22:31.233 [2024-10-28 18:12:47.460287] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:22:31.233 [2024-10-28 18:12:47.460308] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:22:31.233 [2024-10-28 18:12:47.460331] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:31.233 [2024-10-28 18:12:47.460354] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:22:31.233 [2024-10-28 18:12:47.460373] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:22:31.233 [2024-10-28 18:12:47.460392] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:22:31.233 [2024-10-28 18:12:47.460411] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:22:31.233 [2024-10-28 18:12:47.460432] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:22:31.233 [2024-10-28 18:12:47.460451] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:22:31.233 [2024-10-28 18:12:47.460470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:22:31.233 [2024-10-28 18:12:47.460499] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:22:31.233 [2024-10-28 18:12:47.460516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:22:31.233 [2024-10-28 18:12:47.460534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:22:31.233 [2024-10-28 18:12:47.460552] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:22:31.233 [2024-10-28 18:12:47.460569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:22:31.233 [2024-10-28 18:12:47.460585] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:22:31.233 [2024-10-28 18:12:47.460604] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:22:31.233 [2024-10-28 18:12:47.460622] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:22:31.233 [2024-10-28 18:12:47.460652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:22:31.233 [2024-10-28 18:12:47.460673] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:22:31.233 [2024-10-28 18:12:47.460693] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:22:31.233 [2024-10-28 18:12:47.460712] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:22:31.233 [2024-10-28 18:12:47.460731] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:22:31.233 [2024-10-28 18:12:47.460752] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.233 [2024-10-28 18:12:47.460766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:22:31.233 [2024-10-28 18:12:47.460779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.494 ms 00:22:31.233 [2024-10-28 18:12:47.460790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.233 [2024-10-28 18:12:47.493942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.233 [2024-10-28 18:12:47.494003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:22:31.233 [2024-10-28 18:12:47.494041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.062 ms 00:22:31.233 [2024-10-28 18:12:47.494053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.233 [2024-10-28 18:12:47.494172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.233 [2024-10-28 18:12:47.494188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:22:31.233 [2024-10-28 18:12:47.494200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:22:31.233 [2024-10-28 18:12:47.494211] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.233 [2024-10-28 18:12:47.546234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.233 [2024-10-28 18:12:47.546482] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:22:31.233 [2024-10-28 18:12:47.546513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.910 ms 00:22:31.233 [2024-10-28 18:12:47.546527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.233 [2024-10-28 18:12:47.546598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.233 [2024-10-28 18:12:47.546614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:22:31.233 [2024-10-28 18:12:47.546627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:22:31.233 [2024-10-28 18:12:47.546646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.233 [2024-10-28 18:12:47.547064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.233 [2024-10-28 18:12:47.547084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:22:31.233 [2024-10-28 18:12:47.547097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.314 ms 00:22:31.233 [2024-10-28 18:12:47.547109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.233 [2024-10-28 18:12:47.547264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.233 [2024-10-28 18:12:47.547283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:22:31.233 [2024-10-28 18:12:47.547295] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.125 ms 00:22:31.233 [2024-10-28 18:12:47.547313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.233 [2024-10-28 18:12:47.563777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.233 [2024-10-28 18:12:47.563822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:22:31.233 [2024-10-28 18:12:47.563892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.438 ms 00:22:31.233 [2024-10-28 18:12:47.563906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.233 [2024-10-28 18:12:47.579957] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:22:31.233 [2024-10-28 18:12:47.580000] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:22:31.233 [2024-10-28 18:12:47.580035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.233 [2024-10-28 18:12:47.580047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:22:31.234 [2024-10-28 18:12:47.580060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.982 ms 00:22:31.234 [2024-10-28 18:12:47.580071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.234 [2024-10-28 18:12:47.608874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.234 [2024-10-28 18:12:47.608921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:22:31.234 [2024-10-28 18:12:47.608954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.756 ms 00:22:31.234 [2024-10-28 18:12:47.608966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.234 [2024-10-28 18:12:47.624531] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.234 [2024-10-28 18:12:47.624585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:22:31.234 [2024-10-28 18:12:47.624602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.518 ms 00:22:31.234 [2024-10-28 18:12:47.624612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.234 [2024-10-28 18:12:47.639829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.234 [2024-10-28 18:12:47.639876] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:22:31.234 [2024-10-28 18:12:47.639908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.173 ms 00:22:31.234 [2024-10-28 18:12:47.639919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.234 [2024-10-28 18:12:47.640681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.234 [2024-10-28 18:12:47.640703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:22:31.234 [2024-10-28 18:12:47.640716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.649 ms 00:22:31.234 [2024-10-28 18:12:47.640731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.492 [2024-10-28 18:12:47.712988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.492 [2024-10-28 18:12:47.713061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:22:31.492 [2024-10-28 18:12:47.713105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 72.232 ms 00:22:31.492 [2024-10-28 18:12:47.713117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.492 [2024-10-28 18:12:47.726576] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:22:31.492 [2024-10-28 18:12:47.729166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.492 [2024-10-28 18:12:47.729325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:22:31.492 [2024-10-28 18:12:47.729353] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.978 ms 00:22:31.492 [2024-10-28 18:12:47.729366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.492 [2024-10-28 18:12:47.729482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.492 [2024-10-28 18:12:47.729503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:22:31.492 [2024-10-28 18:12:47.729516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:22:31.492 [2024-10-28 18:12:47.729531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.492 [2024-10-28 18:12:47.731120] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.492 [2024-10-28 18:12:47.731161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:22:31.492 [2024-10-28 18:12:47.731177] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.531 ms 00:22:31.492 [2024-10-28 18:12:47.731188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.492 [2024-10-28 18:12:47.731244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.492 [2024-10-28 18:12:47.731260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:22:31.492 [2024-10-28 18:12:47.731272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:22:31.492 [2024-10-28 18:12:47.731282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.492 [2024-10-28 18:12:47.731323] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:22:31.492 [2024-10-28 18:12:47.731343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.492 [2024-10-28 18:12:47.731354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:22:31.492 [2024-10-28 18:12:47.731365] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:22:31.492 [2024-10-28 18:12:47.731375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.492 [2024-10-28 18:12:47.763245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.492 [2024-10-28 18:12:47.763426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:22:31.492 [2024-10-28 18:12:47.763457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.843 ms 00:22:31.492 [2024-10-28 18:12:47.763477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:22:31.492 [2024-10-28 18:12:47.763599] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:22:31.492 [2024-10-28 18:12:47.763622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:22:31.492 [2024-10-28 18:12:47.763635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:22:31.492 [2024-10-28 18:12:47.763646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
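A quick sanity check on the layout dump above: the l2p region size follows directly from the logged parameters, 20971520 L2P entries * 4 bytes per address = 80 MiB, which matches "Region l2p ... blocks: 80.00 MiB". As plain shell arithmetic:

  echo $(( 20971520 * 4 / 1048576 ))   # bytes -> MiB, prints 80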
00:22:31.492 [2024-10-28 18:12:47.764999] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 336.680 ms, result 0 00:22:32.889  [2024-10-28T18:12:50.300Z] Copying: 23/1024 [MB] (23 MBps) [2024-10-28T18:12:51.232Z] Copying: 50/1024 [MB] (26 MBps) [2024-10-28T18:12:52.165Z] Copying: 76/1024 [MB] (26 MBps) [2024-10-28T18:12:53.099Z] Copying: 101/1024 [MB] (24 MBps) [2024-10-28T18:12:54.057Z] Copying: 127/1024 [MB] (26 MBps) [2024-10-28T18:12:54.989Z] Copying: 154/1024 [MB] (26 MBps) [2024-10-28T18:12:56.360Z] Copying: 181/1024 [MB] (26 MBps) [2024-10-28T18:12:57.293Z] Copying: 207/1024 [MB] (26 MBps) [2024-10-28T18:12:58.227Z] Copying: 234/1024 [MB] (26 MBps) [2024-10-28T18:12:59.157Z] Copying: 260/1024 [MB] (25 MBps) [2024-10-28T18:13:00.090Z] Copying: 286/1024 [MB] (26 MBps) [2024-10-28T18:13:01.025Z] Copying: 313/1024 [MB] (26 MBps) [2024-10-28T18:13:02.397Z] Copying: 340/1024 [MB] (27 MBps) [2024-10-28T18:13:03.330Z] Copying: 367/1024 [MB] (27 MBps) [2024-10-28T18:13:04.264Z] Copying: 394/1024 [MB] (26 MBps) [2024-10-28T18:13:05.196Z] Copying: 422/1024 [MB] (28 MBps) [2024-10-28T18:13:06.193Z] Copying: 449/1024 [MB] (26 MBps) [2024-10-28T18:13:07.127Z] Copying: 475/1024 [MB] (26 MBps) [2024-10-28T18:13:08.062Z] Copying: 501/1024 [MB] (25 MBps) [2024-10-28T18:13:08.997Z] Copying: 525/1024 [MB] (23 MBps) [2024-10-28T18:13:10.373Z] Copying: 549/1024 [MB] (24 MBps) [2024-10-28T18:13:11.309Z] Copying: 576/1024 [MB] (26 MBps) [2024-10-28T18:13:12.247Z] Copying: 605/1024 [MB] (28 MBps) [2024-10-28T18:13:13.179Z] Copying: 633/1024 [MB] (28 MBps) [2024-10-28T18:13:14.113Z] Copying: 661/1024 [MB] (27 MBps) [2024-10-28T18:13:15.047Z] Copying: 688/1024 [MB] (27 MBps) [2024-10-28T18:13:16.420Z] Copying: 715/1024 [MB] (27 MBps) [2024-10-28T18:13:17.353Z] Copying: 741/1024 [MB] (26 MBps) [2024-10-28T18:13:18.286Z] Copying: 767/1024 [MB] (25 MBps) [2024-10-28T18:13:19.221Z] Copying: 791/1024 [MB] (24 MBps) [2024-10-28T18:13:20.157Z] Copying: 817/1024 [MB] (25 MBps) [2024-10-28T18:13:21.091Z] Copying: 843/1024 [MB] (25 MBps) [2024-10-28T18:13:22.024Z] Copying: 867/1024 [MB] (23 MBps) [2024-10-28T18:13:23.012Z] Copying: 892/1024 [MB] (25 MBps) [2024-10-28T18:13:24.387Z] Copying: 918/1024 [MB] (26 MBps) [2024-10-28T18:13:25.322Z] Copying: 944/1024 [MB] (25 MBps) [2024-10-28T18:13:26.257Z] Copying: 969/1024 [MB] (24 MBps) [2024-10-28T18:13:27.192Z] Copying: 994/1024 [MB] (25 MBps) [2024-10-28T18:13:27.192Z] Copying: 1020/1024 [MB] (25 MBps) [2024-10-28T18:13:27.759Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-28 18:13:27.469455] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.469731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:11.281 [2024-10-28 18:13:27.469917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:11.281 [2024-10-28 18:13:27.470040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.281 [2024-10-28 18:13:27.470114] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:11.281 [2024-10-28 18:13:27.474060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.474099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:11.281 [2024-10-28 18:13:27.474116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.920 ms 00:23:11.281 
[2024-10-28 18:13:27.474128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.281 [2024-10-28 18:13:27.474371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.474396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:11.281 [2024-10-28 18:13:27.474410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:23:11.281 [2024-10-28 18:13:27.474421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.281 [2024-10-28 18:13:27.479512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.479568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:11.281 [2024-10-28 18:13:27.479585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.065 ms 00:23:11.281 [2024-10-28 18:13:27.479597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.281 [2024-10-28 18:13:27.487288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.487325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:11.281 [2024-10-28 18:13:27.487339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.647 ms 00:23:11.281 [2024-10-28 18:13:27.487349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.281 [2024-10-28 18:13:27.517044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.517085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:11.281 [2024-10-28 18:13:27.517101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.638 ms 00:23:11.281 [2024-10-28 18:13:27.517112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.281 [2024-10-28 18:13:27.533663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.533711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:11.281 [2024-10-28 18:13:27.533728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.508 ms 00:23:11.281 [2024-10-28 18:13:27.533739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.281 [2024-10-28 18:13:27.649190] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.649251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:11.281 [2024-10-28 18:13:27.649271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 115.406 ms 00:23:11.281 [2024-10-28 18:13:27.649282] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.281 [2024-10-28 18:13:27.684413] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.684463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:11.281 [2024-10-28 18:13:27.684480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.107 ms 00:23:11.281 [2024-10-28 18:13:27.684490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.281 [2024-10-28 18:13:27.732018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.281 [2024-10-28 18:13:27.732075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:11.281 [2024-10-28 18:13:27.732116] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.479 ms 00:23:11.281 [2024-10-28 18:13:27.732132] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.542 [2024-10-28 18:13:27.780595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.542 [2024-10-28 18:13:27.780657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:11.542 [2024-10-28 18:13:27.780682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.406 ms 00:23:11.542 [2024-10-28 18:13:27.780699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.542 [2024-10-28 18:13:27.819407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.542 [2024-10-28 18:13:27.819587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:11.542 [2024-10-28 18:13:27.819616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.513 ms 00:23:11.542 [2024-10-28 18:13:27.819628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.542 [2024-10-28 18:13:27.819676] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:11.542 [2024-10-28 18:13:27.819701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:23:11.542 [2024-10-28 18:13:27.819717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:11.542 [2024-10-28 18:13:27.819729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:11.542 [2024-10-28 18:13:27.819740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:11.542 [2024-10-28 18:13:27.819752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:11.542 [2024-10-28 18:13:27.819763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:11.542 [2024-10-28 18:13:27.819775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819786] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819894] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819929] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.819998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820091] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820148] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820218] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 
18:13:27.820504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820762] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 
00:23:11.543 [2024-10-28 18:13:27.820795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:11.543 [2024-10-28 18:13:27.820889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:11.544 [2024-10-28 18:13:27.820900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:11.544 [2024-10-28 18:13:27.820912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:11.544 [2024-10-28 18:13:27.820933] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:11.544 [2024-10-28 18:13:27.820945] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: e26672bb-562c-40ba-bbba-bd2e0247fc2e 00:23:11.544 [2024-10-28 18:13:27.820956] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:23:11.544 [2024-10-28 18:13:27.820967] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 13504 00:23:11.544 [2024-10-28 18:13:27.820977] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 12544 00:23:11.544 [2024-10-28 18:13:27.820990] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0765 00:23:11.544 [2024-10-28 18:13:27.821000] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:11.544 [2024-10-28 18:13:27.821018] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:11.544 [2024-10-28 18:13:27.821029] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:11.544 [2024-10-28 18:13:27.821050] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:11.544 [2024-10-28 18:13:27.821060] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:11.544 [2024-10-28 18:13:27.821072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.544 [2024-10-28 18:13:27.821084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:11.544 [2024-10-28 18:13:27.821095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.397 ms 00:23:11.544 [2024-10-28 18:13:27.821106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.544 [2024-10-28 18:13:27.838228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:11.544 [2024-10-28 18:13:27.838298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:11.544 [2024-10-28 18:13:27.838314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.077 ms 00:23:11.544 [2024-10-28 18:13:27.838333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.544 [2024-10-28 18:13:27.838791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:23:11.544 [2024-10-28 18:13:27.838826] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:11.544 [2024-10-28 18:13:27.838877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.431 ms 00:23:11.544 [2024-10-28 18:13:27.838891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.544 [2024-10-28 18:13:27.880692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.544 [2024-10-28 18:13:27.880752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:11.544 [2024-10-28 18:13:27.880774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.544 [2024-10-28 18:13:27.880785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.544 [2024-10-28 18:13:27.880857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.544 [2024-10-28 18:13:27.880873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:11.544 [2024-10-28 18:13:27.880884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.544 [2024-10-28 18:13:27.880894] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.544 [2024-10-28 18:13:27.880997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.544 [2024-10-28 18:13:27.881049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:11.544 [2024-10-28 18:13:27.881061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.544 [2024-10-28 18:13:27.881079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.544 [2024-10-28 18:13:27.881102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.544 [2024-10-28 18:13:27.881115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:11.544 [2024-10-28 18:13:27.881126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.544 [2024-10-28 18:13:27.881137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.544 [2024-10-28 18:13:27.981704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.544 [2024-10-28 18:13:27.981775] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:11.544 [2024-10-28 18:13:27.981802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.544 [2024-10-28 18:13:27.981814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.803 [2024-10-28 18:13:28.061821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.803 [2024-10-28 18:13:28.061923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:11.803 [2024-10-28 18:13:28.061941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.803 [2024-10-28 18:13:28.061952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.803 [2024-10-28 18:13:28.062070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.803 [2024-10-28 18:13:28.062088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:11.803 [2024-10-28 18:13:28.062100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.803 [2024-10-28 18:13:28.062110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
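The statistics block in the shutdown trace above can be reproduced by hand: WAF = total writes / user writes = 13504 / 12544 ≈ 1.0765, exactly the figure logged. The "total valid LBAs: 131072" line likewise agrees with the bands dump, where band 1 holds 131072 / 261120 valid blocks and every other band is free. As a one-liner:

  awk 'BEGIN { printf "%.4f\n", 13504 / 12544 }'   # -> 1.0765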
00:23:11.803 [2024-10-28 18:13:28.062180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.803 [2024-10-28 18:13:28.062196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:11.803 [2024-10-28 18:13:28.062207] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.803 [2024-10-28 18:13:28.062218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.803 [2024-10-28 18:13:28.062337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.803 [2024-10-28 18:13:28.062355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:11.803 [2024-10-28 18:13:28.062367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.803 [2024-10-28 18:13:28.062379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.803 [2024-10-28 18:13:28.062432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.803 [2024-10-28 18:13:28.062449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:11.803 [2024-10-28 18:13:28.062462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.803 [2024-10-28 18:13:28.062472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.803 [2024-10-28 18:13:28.062518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.803 [2024-10-28 18:13:28.062544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:11.803 [2024-10-28 18:13:28.062556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.803 [2024-10-28 18:13:28.062566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.803 [2024-10-28 18:13:28.062625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:11.803 [2024-10-28 18:13:28.062643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:11.803 [2024-10-28 18:13:28.062654] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:11.803 [2024-10-28 18:13:28.062665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:11.803 [2024-10-28 18:13:28.062803] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 593.317 ms, result 0 00:23:12.741 00:23:12.741 00:23:12.741 18:13:28 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:15.273 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 76139 00:23:15.273 18:13:31 ftl.ftl_restore -- common/autotest_common.sh@952 -- # '[' -z 76139 ']' 00:23:15.273 18:13:31 ftl.ftl_restore -- common/autotest_common.sh@956 -- # kill -0 76139 00:23:15.273 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (76139) - No such 
process 00:23:15.273 Process with pid 76139 is not found 00:23:15.273 18:13:31 ftl.ftl_restore -- common/autotest_common.sh@979 -- # echo 'Process with pid 76139 is not found' 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:23:15.273 Remove shared memory files 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:23:15.273 18:13:31 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:23:15.273 00:23:15.273 real 3m16.275s 00:23:15.273 user 3m1.864s 00:23:15.273 sys 0m16.380s 00:23:15.273 18:13:31 ftl.ftl_restore -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:15.273 18:13:31 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:23:15.273 ************************************ 00:23:15.273 END TEST ftl_restore 00:23:15.273 ************************************ 00:23:15.273 18:13:31 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:15.273 18:13:31 ftl -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:23:15.273 18:13:31 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:15.273 18:13:31 ftl -- common/autotest_common.sh@10 -- # set +x 00:23:15.273 ************************************ 00:23:15.273 START TEST ftl_dirty_shutdown 00:23:15.273 ************************************ 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:23:15.273 * Looking for test storage... 
00:23:15.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:15.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.273 --rc genhtml_branch_coverage=1 00:23:15.273 --rc genhtml_function_coverage=1 00:23:15.273 --rc genhtml_legend=1 00:23:15.273 --rc geninfo_all_blocks=1 00:23:15.273 --rc geninfo_unexecuted_blocks=1 00:23:15.273 00:23:15.273 ' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:15.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.273 --rc genhtml_branch_coverage=1 00:23:15.273 --rc genhtml_function_coverage=1 00:23:15.273 --rc genhtml_legend=1 00:23:15.273 --rc geninfo_all_blocks=1 00:23:15.273 --rc geninfo_unexecuted_blocks=1 00:23:15.273 00:23:15.273 ' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:15.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.273 --rc genhtml_branch_coverage=1 00:23:15.273 --rc genhtml_function_coverage=1 00:23:15.273 --rc genhtml_legend=1 00:23:15.273 --rc geninfo_all_blocks=1 00:23:15.273 --rc geninfo_unexecuted_blocks=1 00:23:15.273 00:23:15.273 ' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:15.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:15.273 --rc genhtml_branch_coverage=1 00:23:15.273 --rc genhtml_function_coverage=1 00:23:15.273 --rc genhtml_legend=1 00:23:15.273 --rc geninfo_all_blocks=1 00:23:15.273 --rc geninfo_unexecuted_blocks=1 00:23:15.273 00:23:15.273 ' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:23:15.273 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:23:15.274 18:13:31 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=78217 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 78217 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@833 -- # '[' -z 78217 ']' 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.274 18:13:31 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:15.532 [2024-10-28 18:13:31.791129] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
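The launch-and-wait handshake traced above — spdk_tgt pinned to core mask 0x1, then waitforlisten polling until the RPC socket at /var/tmp/spdk.sock answers — can be reproduced outside the harness. A minimal sketch, assuming the same checkout under ~/spdk_repo/spdk (the retry budget is illustrative):

  # Start the SPDK target on core 0, as dirty_shutdown.sh@44 does
  ~/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  svcpid=$!
  # waitforlisten-style readiness probe: ready once any RPC round-trips
  for _ in $(seq 1 100); do
    ~/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.5
  done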
00:23:15.532 [2024-10-28 18:13:31.791353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78217 ] 00:23:15.532 [2024-10-28 18:13:31.976159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.790 [2024-10-28 18:13:32.075128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:16.358 18:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:16.358 18:13:32 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@866 -- # return 0 00:23:16.358 18:13:32 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:23:16.358 18:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:23:16.358 18:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:23:16.358 18:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:23:16.358 18:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:23:16.358 18:13:32 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:23:16.925 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:23:16.925 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:23:16.925 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:23:16.925 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=nvme0n1 00:23:16.925 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:16.925 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:16.925 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:16.925 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:23:17.183 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:17.183 { 00:23:17.183 "name": "nvme0n1", 00:23:17.183 "aliases": [ 00:23:17.183 "e3f2dc01-547c-441e-b202-0c4695e8d97e" 00:23:17.183 ], 00:23:17.183 "product_name": "NVMe disk", 00:23:17.183 "block_size": 4096, 00:23:17.183 "num_blocks": 1310720, 00:23:17.183 "uuid": "e3f2dc01-547c-441e-b202-0c4695e8d97e", 00:23:17.183 "numa_id": -1, 00:23:17.183 "assigned_rate_limits": { 00:23:17.183 "rw_ios_per_sec": 0, 00:23:17.183 "rw_mbytes_per_sec": 0, 00:23:17.183 "r_mbytes_per_sec": 0, 00:23:17.183 "w_mbytes_per_sec": 0 00:23:17.183 }, 00:23:17.183 "claimed": true, 00:23:17.183 "claim_type": "read_many_write_one", 00:23:17.183 "zoned": false, 00:23:17.183 "supported_io_types": { 00:23:17.183 "read": true, 00:23:17.183 "write": true, 00:23:17.183 "unmap": true, 00:23:17.183 "flush": true, 00:23:17.183 "reset": true, 00:23:17.183 "nvme_admin": true, 00:23:17.183 "nvme_io": true, 00:23:17.183 "nvme_io_md": false, 00:23:17.183 "write_zeroes": true, 00:23:17.183 "zcopy": false, 00:23:17.183 "get_zone_info": false, 00:23:17.183 "zone_management": false, 00:23:17.183 "zone_append": false, 00:23:17.183 "compare": true, 00:23:17.183 "compare_and_write": false, 00:23:17.183 "abort": true, 00:23:17.183 "seek_hole": false, 00:23:17.183 "seek_data": false, 00:23:17.183 
"copy": true, 00:23:17.183 "nvme_iov_md": false 00:23:17.183 }, 00:23:17.183 "driver_specific": { 00:23:17.183 "nvme": [ 00:23:17.183 { 00:23:17.183 "pci_address": "0000:00:11.0", 00:23:17.183 "trid": { 00:23:17.183 "trtype": "PCIe", 00:23:17.183 "traddr": "0000:00:11.0" 00:23:17.183 }, 00:23:17.183 "ctrlr_data": { 00:23:17.183 "cntlid": 0, 00:23:17.183 "vendor_id": "0x1b36", 00:23:17.183 "model_number": "QEMU NVMe Ctrl", 00:23:17.183 "serial_number": "12341", 00:23:17.183 "firmware_revision": "8.0.0", 00:23:17.183 "subnqn": "nqn.2019-08.org.qemu:12341", 00:23:17.183 "oacs": { 00:23:17.183 "security": 0, 00:23:17.183 "format": 1, 00:23:17.183 "firmware": 0, 00:23:17.183 "ns_manage": 1 00:23:17.184 }, 00:23:17.184 "multi_ctrlr": false, 00:23:17.184 "ana_reporting": false 00:23:17.184 }, 00:23:17.184 "vs": { 00:23:17.184 "nvme_version": "1.4" 00:23:17.184 }, 00:23:17.184 "ns_data": { 00:23:17.184 "id": 1, 00:23:17.184 "can_share": false 00:23:17.184 } 00:23:17.184 } 00:23:17.184 ], 00:23:17.184 "mp_policy": "active_passive" 00:23:17.184 } 00:23:17.184 } 00:23:17.184 ]' 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:23:17.184 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:23:17.501 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=45c7aefe-3da6-4784-a5b0-d33c7d21a4fd 00:23:17.501 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:23:17.501 18:13:33 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 45c7aefe-3da6-4784-a5b0-d33c7d21a4fd 00:23:17.759 18:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:23:18.017 18:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=061186b3-fefb-4102-90a9-08187fe40ffb 00:23:18.017 18:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 061186b3-fefb-4102-90a9-08187fe40ffb 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:18.582 18:13:34 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:18.840 { 00:23:18.840 "name": "7255fe0f-233f-4e89-a167-f566e9dddd7f", 00:23:18.840 "aliases": [ 00:23:18.840 "lvs/nvme0n1p0" 00:23:18.840 ], 00:23:18.840 "product_name": "Logical Volume", 00:23:18.840 "block_size": 4096, 00:23:18.840 "num_blocks": 26476544, 00:23:18.840 "uuid": "7255fe0f-233f-4e89-a167-f566e9dddd7f", 00:23:18.840 "assigned_rate_limits": { 00:23:18.840 "rw_ios_per_sec": 0, 00:23:18.840 "rw_mbytes_per_sec": 0, 00:23:18.840 "r_mbytes_per_sec": 0, 00:23:18.840 "w_mbytes_per_sec": 0 00:23:18.840 }, 00:23:18.840 "claimed": false, 00:23:18.840 "zoned": false, 00:23:18.840 "supported_io_types": { 00:23:18.840 "read": true, 00:23:18.840 "write": true, 00:23:18.840 "unmap": true, 00:23:18.840 "flush": false, 00:23:18.840 "reset": true, 00:23:18.840 "nvme_admin": false, 00:23:18.840 "nvme_io": false, 00:23:18.840 "nvme_io_md": false, 00:23:18.840 "write_zeroes": true, 00:23:18.840 "zcopy": false, 00:23:18.840 "get_zone_info": false, 00:23:18.840 "zone_management": false, 00:23:18.840 "zone_append": false, 00:23:18.840 "compare": false, 00:23:18.840 "compare_and_write": false, 00:23:18.840 "abort": false, 00:23:18.840 "seek_hole": true, 00:23:18.840 "seek_data": true, 00:23:18.840 "copy": false, 00:23:18.840 "nvme_iov_md": false 00:23:18.840 }, 00:23:18.840 "driver_specific": { 00:23:18.840 "lvol": { 00:23:18.840 "lvol_store_uuid": "061186b3-fefb-4102-90a9-08187fe40ffb", 00:23:18.840 "base_bdev": "nvme0n1", 00:23:18.840 "thin_provision": true, 00:23:18.840 "num_allocated_clusters": 0, 00:23:18.840 "snapshot": false, 00:23:18.840 "clone": false, 00:23:18.840 "esnap_clone": false 00:23:18.840 } 00:23:18.840 } 00:23:18.840 } 00:23:18.840 ]' 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:23:18.840 18:13:35 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:23:19.098 18:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:23:19.098 18:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:23:19.098 18:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:19.098 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:19.098 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:19.098 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:19.098 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:19.098 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:19.663 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:19.663 { 00:23:19.663 "name": "7255fe0f-233f-4e89-a167-f566e9dddd7f", 00:23:19.663 "aliases": [ 00:23:19.663 "lvs/nvme0n1p0" 00:23:19.663 ], 00:23:19.663 "product_name": "Logical Volume", 00:23:19.663 "block_size": 4096, 00:23:19.663 "num_blocks": 26476544, 00:23:19.663 "uuid": "7255fe0f-233f-4e89-a167-f566e9dddd7f", 00:23:19.663 "assigned_rate_limits": { 00:23:19.663 "rw_ios_per_sec": 0, 00:23:19.663 "rw_mbytes_per_sec": 0, 00:23:19.663 "r_mbytes_per_sec": 0, 00:23:19.663 "w_mbytes_per_sec": 0 00:23:19.663 }, 00:23:19.663 "claimed": false, 00:23:19.663 "zoned": false, 00:23:19.663 "supported_io_types": { 00:23:19.663 "read": true, 00:23:19.663 "write": true, 00:23:19.663 "unmap": true, 00:23:19.663 "flush": false, 00:23:19.663 "reset": true, 00:23:19.663 "nvme_admin": false, 00:23:19.663 "nvme_io": false, 00:23:19.663 "nvme_io_md": false, 00:23:19.663 "write_zeroes": true, 00:23:19.663 "zcopy": false, 00:23:19.663 "get_zone_info": false, 00:23:19.663 "zone_management": false, 00:23:19.663 "zone_append": false, 00:23:19.663 "compare": false, 00:23:19.663 "compare_and_write": false, 00:23:19.663 "abort": false, 00:23:19.663 "seek_hole": true, 00:23:19.663 "seek_data": true, 00:23:19.663 "copy": false, 00:23:19.663 "nvme_iov_md": false 00:23:19.663 }, 00:23:19.663 "driver_specific": { 00:23:19.663 "lvol": { 00:23:19.663 "lvol_store_uuid": "061186b3-fefb-4102-90a9-08187fe40ffb", 00:23:19.663 "base_bdev": "nvme0n1", 00:23:19.663 "thin_provision": true, 00:23:19.663 "num_allocated_clusters": 0, 00:23:19.663 "snapshot": false, 00:23:19.663 "clone": false, 00:23:19.663 "esnap_clone": false 00:23:19.663 } 00:23:19.663 } 00:23:19.663 } 00:23:19.663 ]' 00:23:19.663 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:19.663 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:19.663 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:19.663 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:19.663 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:19.663 18:13:35 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:19.663 18:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:23:19.663 18:13:35 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:23:19.920 18:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:23:19.920 18:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:19.920 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:19.920 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:23:19.920 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:23:19.920 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:23:19.920 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7255fe0f-233f-4e89-a167-f566e9dddd7f 00:23:20.177 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:23:20.177 { 00:23:20.177 "name": "7255fe0f-233f-4e89-a167-f566e9dddd7f", 00:23:20.177 "aliases": [ 00:23:20.177 "lvs/nvme0n1p0" 00:23:20.177 ], 00:23:20.177 "product_name": "Logical Volume", 00:23:20.177 "block_size": 4096, 00:23:20.177 "num_blocks": 26476544, 00:23:20.177 "uuid": "7255fe0f-233f-4e89-a167-f566e9dddd7f", 00:23:20.177 "assigned_rate_limits": { 00:23:20.177 "rw_ios_per_sec": 0, 00:23:20.177 "rw_mbytes_per_sec": 0, 00:23:20.177 "r_mbytes_per_sec": 0, 00:23:20.177 "w_mbytes_per_sec": 0 00:23:20.177 }, 00:23:20.177 "claimed": false, 00:23:20.177 "zoned": false, 00:23:20.177 "supported_io_types": { 00:23:20.178 "read": true, 00:23:20.178 "write": true, 00:23:20.178 "unmap": true, 00:23:20.178 "flush": false, 00:23:20.178 "reset": true, 00:23:20.178 "nvme_admin": false, 00:23:20.178 "nvme_io": false, 00:23:20.178 "nvme_io_md": false, 00:23:20.178 "write_zeroes": true, 00:23:20.178 "zcopy": false, 00:23:20.178 "get_zone_info": false, 00:23:20.178 "zone_management": false, 00:23:20.178 "zone_append": false, 00:23:20.178 "compare": false, 00:23:20.178 "compare_and_write": false, 00:23:20.178 "abort": false, 00:23:20.178 "seek_hole": true, 00:23:20.178 "seek_data": true, 00:23:20.178 "copy": false, 00:23:20.178 "nvme_iov_md": false 00:23:20.178 }, 00:23:20.178 "driver_specific": { 00:23:20.178 "lvol": { 00:23:20.178 "lvol_store_uuid": "061186b3-fefb-4102-90a9-08187fe40ffb", 00:23:20.178 "base_bdev": "nvme0n1", 00:23:20.178 "thin_provision": true, 00:23:20.178 "num_allocated_clusters": 0, 00:23:20.178 "snapshot": false, 00:23:20.178 "clone": false, 00:23:20.178 "esnap_clone": false 00:23:20.178 } 00:23:20.178 } 00:23:20.178 } 00:23:20.178 ]' 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # nb=26476544 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=103424 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1390 -- # echo 103424 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 7255fe0f-233f-4e89-a167-f566e9dddd7f 
--l2p_dram_limit 10' 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:23:20.178 18:13:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 7255fe0f-233f-4e89-a167-f566e9dddd7f --l2p_dram_limit 10 -c nvc0n1p0 00:23:20.744 [2024-10-28 18:13:36.921224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.744 [2024-10-28 18:13:36.921307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:20.744 [2024-10-28 18:13:36.921334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:23:20.744 [2024-10-28 18:13:36.921350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.744 [2024-10-28 18:13:36.921437] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.744 [2024-10-28 18:13:36.921458] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:20.744 [2024-10-28 18:13:36.921476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:23:20.744 [2024-10-28 18:13:36.921490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.744 [2024-10-28 18:13:36.921533] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:20.744 [2024-10-28 18:13:36.922515] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:20.744 [2024-10-28 18:13:36.922559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.744 [2024-10-28 18:13:36.922587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:20.744 [2024-10-28 18:13:36.922605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.037 ms 00:23:20.744 [2024-10-28 18:13:36.922620] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.744 [2024-10-28 18:13:36.922768] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 97f41789-0530-4005-80e7-a5ffa4625272 00:23:20.744 [2024-10-28 18:13:36.923824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.744 [2024-10-28 18:13:36.923892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:23:20.744 [2024-10-28 18:13:36.923912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:23:20.744 [2024-10-28 18:13:36.923931] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.744 [2024-10-28 18:13:36.928465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.744 [2024-10-28 18:13:36.928521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:20.744 [2024-10-28 18:13:36.928543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.469 ms 00:23:20.744 [2024-10-28 18:13:36.928560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.744 [2024-10-28 18:13:36.928695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.744 [2024-10-28 18:13:36.928721] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:20.744 [2024-10-28 18:13:36.928738] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:23:20.744 [2024-10-28 18:13:36.928760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.744 [2024-10-28 18:13:36.928857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.744 [2024-10-28 18:13:36.928883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:20.744 [2024-10-28 18:13:36.928900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:23:20.744 [2024-10-28 18:13:36.928922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.744 [2024-10-28 18:13:36.928958] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:20.744 [2024-10-28 18:13:36.933503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.744 [2024-10-28 18:13:36.933546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:20.744 [2024-10-28 18:13:36.933568] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.550 ms 00:23:20.745 [2024-10-28 18:13:36.933584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.745 [2024-10-28 18:13:36.933648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.745 [2024-10-28 18:13:36.933667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:20.745 [2024-10-28 18:13:36.933685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:23:20.745 [2024-10-28 18:13:36.933700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.745 [2024-10-28 18:13:36.933750] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:23:20.745 [2024-10-28 18:13:36.933928] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:20.745 [2024-10-28 18:13:36.933968] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:20.745 [2024-10-28 18:13:36.933988] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:20.745 [2024-10-28 18:13:36.934008] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:20.745 [2024-10-28 18:13:36.934027] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:20.745 [2024-10-28 18:13:36.934045] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:20.745 [2024-10-28 18:13:36.934059] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:20.745 [2024-10-28 18:13:36.934080] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:20.745 [2024-10-28 18:13:36.934094] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:20.745 [2024-10-28 18:13:36.934112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.745 [2024-10-28 18:13:36.934127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:20.745 [2024-10-28 18:13:36.934145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:23:20.745 [2024-10-28 18:13:36.934173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.745 [2024-10-28 18:13:36.934277] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.745 [2024-10-28 18:13:36.934295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:20.745 [2024-10-28 18:13:36.934313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:23:20.745 [2024-10-28 18:13:36.934328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.745 [2024-10-28 18:13:36.934446] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:20.745 [2024-10-28 18:13:36.934482] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:20.745 [2024-10-28 18:13:36.934502] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:20.745 [2024-10-28 18:13:36.934518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.745 [2024-10-28 18:13:36.934536] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:20.745 [2024-10-28 18:13:36.934551] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:20.745 [2024-10-28 18:13:36.934578] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:20.745 [2024-10-28 18:13:36.934595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:20.745 [2024-10-28 18:13:36.934612] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:20.745 [2024-10-28 18:13:36.934626] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:20.745 [2024-10-28 18:13:36.934642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:20.745 [2024-10-28 18:13:36.934657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:20.745 [2024-10-28 18:13:36.934673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:20.745 [2024-10-28 18:13:36.934687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:20.745 [2024-10-28 18:13:36.934704] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:20.745 [2024-10-28 18:13:36.934719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.745 [2024-10-28 18:13:36.934737] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:20.745 [2024-10-28 18:13:36.934752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:20.745 [2024-10-28 18:13:36.934771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.745 [2024-10-28 18:13:36.934786] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:20.745 [2024-10-28 18:13:36.934803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:20.745 [2024-10-28 18:13:36.934817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.745 [2024-10-28 18:13:36.934833] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:20.745 [2024-10-28 18:13:36.934869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:20.745 [2024-10-28 18:13:36.934888] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.745 [2024-10-28 18:13:36.934903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:20.745 [2024-10-28 18:13:36.934919] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:20.745 [2024-10-28 18:13:36.934934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.745 [2024-10-28 18:13:36.934950] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:20.745 [2024-10-28 18:13:36.934966] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:20.745 [2024-10-28 18:13:36.934984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:20.745 [2024-10-28 18:13:36.934998] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:20.745 [2024-10-28 18:13:36.935017] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:20.745 [2024-10-28 18:13:36.935036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:20.745 [2024-10-28 18:13:36.935053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:20.745 [2024-10-28 18:13:36.935068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:20.745 [2024-10-28 18:13:36.935084] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:20.745 [2024-10-28 18:13:36.935098] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:20.745 [2024-10-28 18:13:36.935115] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:20.745 [2024-10-28 18:13:36.935129] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.745 [2024-10-28 18:13:36.935146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:20.745 [2024-10-28 18:13:36.935161] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:20.745 [2024-10-28 18:13:36.935177] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.745 [2024-10-28 18:13:36.935191] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:20.745 [2024-10-28 18:13:36.935209] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:20.745 [2024-10-28 18:13:36.935223] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:20.745 [2024-10-28 18:13:36.935243] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:20.745 [2024-10-28 18:13:36.935259] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:20.745 [2024-10-28 18:13:36.935278] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:20.745 [2024-10-28 18:13:36.935292] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:20.745 [2024-10-28 18:13:36.935309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:20.745 [2024-10-28 18:13:36.935324] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:20.745 [2024-10-28 18:13:36.935341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:20.745 [2024-10-28 18:13:36.935360] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:20.745 [2024-10-28 18:13:36.935380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:20.745 [2024-10-28 18:13:36.935400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:20.745 [2024-10-28 18:13:36.935418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:20.745 [2024-10-28 18:13:36.935433] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:20.745 [2024-10-28 18:13:36.935449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:20.745 [2024-10-28 18:13:36.935464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:20.745 [2024-10-28 18:13:36.935482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:20.745 [2024-10-28 18:13:36.935498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:20.745 [2024-10-28 18:13:36.935515] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:20.745 [2024-10-28 18:13:36.935530] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:20.745 [2024-10-28 18:13:36.935549] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:20.745 [2024-10-28 18:13:36.935564] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:20.745 [2024-10-28 18:13:36.935583] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:20.745 [2024-10-28 18:13:36.935597] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:20.745 [2024-10-28 18:13:36.935619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:20.745 [2024-10-28 18:13:36.935634] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:20.745 [2024-10-28 18:13:36.935652] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:20.745 [2024-10-28 18:13:36.935668] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:23:20.745 [2024-10-28 18:13:36.935686] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:20.745 [2024-10-28 18:13:36.935701] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:20.745 [2024-10-28 18:13:36.935717] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:20.745 [2024-10-28 18:13:36.935733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:20.745 [2024-10-28 18:13:36.935750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:20.745 [2024-10-28 18:13:36.935765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.361 ms 00:23:20.745 [2024-10-28 18:13:36.935782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:20.745 [2024-10-28 18:13:36.935852] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:23:20.745 [2024-10-28 18:13:36.935881] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:23:22.647 [2024-10-28 18:13:38.859162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:38.859256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:23:22.647 [2024-10-28 18:13:38.859281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1923.320 ms 00:23:22.647 [2024-10-28 18:13:38.859300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.647 [2024-10-28 18:13:38.892578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:38.892647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:22.647 [2024-10-28 18:13:38.892670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.981 ms 00:23:22.647 [2024-10-28 18:13:38.892689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.647 [2024-10-28 18:13:38.892887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:38.892915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:22.647 [2024-10-28 18:13:38.892933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:23:22.647 [2024-10-28 18:13:38.892965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.647 [2024-10-28 18:13:38.934035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:38.934113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:22.647 [2024-10-28 18:13:38.934135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.980 ms 00:23:22.647 [2024-10-28 18:13:38.934153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.647 [2024-10-28 18:13:38.934207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:38.934234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:22.647 [2024-10-28 18:13:38.934251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:22.647 [2024-10-28 18:13:38.934267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.647 [2024-10-28 18:13:38.934671] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:38.934708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:22.647 [2024-10-28 18:13:38.934726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.324 ms 00:23:22.647 [2024-10-28 18:13:38.934743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.647 [2024-10-28 18:13:38.934905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:38.934928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:22.647 [2024-10-28 18:13:38.934947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:22.647 [2024-10-28 18:13:38.934966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.647 [2024-10-28 18:13:38.952622] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:38.952679] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:22.647 [2024-10-28 18:13:38.952700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.627 ms 00:23:22.647 [2024-10-28 18:13:38.952717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.647 [2024-10-28 18:13:38.966515] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:22.647 [2024-10-28 18:13:38.969342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:38.969396] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:22.647 [2024-10-28 18:13:38.969419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.509 ms 00:23:22.647 [2024-10-28 18:13:38.969435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.647 [2024-10-28 18:13:39.039528] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.647 [2024-10-28 18:13:39.039593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:23:22.648 [2024-10-28 18:13:39.039620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 70.049 ms 00:23:22.648 [2024-10-28 18:13:39.039637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.648 [2024-10-28 18:13:39.039877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.648 [2024-10-28 18:13:39.039904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:22.648 [2024-10-28 18:13:39.039926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.177 ms 00:23:22.648 [2024-10-28 18:13:39.039941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.648 [2024-10-28 18:13:39.071734] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.648 [2024-10-28 18:13:39.071781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:23:22.648 [2024-10-28 18:13:39.071805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.717 ms 00:23:22.648 [2024-10-28 18:13:39.071821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.648 [2024-10-28 18:13:39.103092] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.648 [2024-10-28 18:13:39.103138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:23:22.648 [2024-10-28 18:13:39.103162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.189 ms 00:23:22.648 [2024-10-28 18:13:39.103177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.648 [2024-10-28 18:13:39.103927] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.648 [2024-10-28 18:13:39.103966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:22.648 [2024-10-28 18:13:39.103987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.695 ms 00:23:22.648 [2024-10-28 18:13:39.104002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.911 [2024-10-28 18:13:39.187942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.911 [2024-10-28 18:13:39.188014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:23:22.911 [2024-10-28 18:13:39.188045] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 83.827 ms 00:23:22.911 [2024-10-28 18:13:39.188061] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.911 [2024-10-28 18:13:39.220395] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.911 [2024-10-28 18:13:39.220449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:23:22.911 [2024-10-28 18:13:39.220473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.216 ms 00:23:22.911 [2024-10-28 18:13:39.220489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.911 [2024-10-28 18:13:39.252062] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.911 [2024-10-28 18:13:39.252108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:23:22.911 [2024-10-28 18:13:39.252142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.507 ms 00:23:22.911 [2024-10-28 18:13:39.252157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.911 [2024-10-28 18:13:39.283849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.911 [2024-10-28 18:13:39.283895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:22.911 [2024-10-28 18:13:39.283919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.625 ms 00:23:22.911 [2024-10-28 18:13:39.283934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.912 [2024-10-28 18:13:39.283997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.912 [2024-10-28 18:13:39.284017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:22.912 [2024-10-28 18:13:39.284038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:23:22.912 [2024-10-28 18:13:39.284053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.912 [2024-10-28 18:13:39.284180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:22.912 [2024-10-28 18:13:39.284202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:22.912 [2024-10-28 18:13:39.284225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:23:22.912 [2024-10-28 18:13:39.284240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:22.912 [2024-10-28 18:13:39.285306] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2363.604 ms, result 0 00:23:22.912 { 00:23:22.912 "name": "ftl0", 00:23:22.912 "uuid": "97f41789-0530-4005-80e7-a5ffa4625272" 00:23:22.912 } 00:23:22.912 18:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:23:22.912 18:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:23:23.171 18:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:23:23.171 18:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:23:23.171 18:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:23:23.428 /dev/nbd0 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@871 -- # local i 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@873 -- # (( i = 1 )) 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # break 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:23:23.428 1+0 records in 00:23:23.428 1+0 records out 00:23:23.428 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000288777 s, 14.2 MB/s 00:23:23.428 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:23.685 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # size=4096 00:23:23.686 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:23:23.686 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:23:23.686 18:13:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # return 0 00:23:23.686 18:13:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:23:23.686 [2024-10-28 18:13:40.032583] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:23:23.686 [2024-10-28 18:13:40.032800] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78359 ] 00:23:23.943 [2024-10-28 18:13:40.206456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:23.943 [2024-10-28 18:13:40.307147] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:25.319  [2024-10-28T18:13:42.733Z] Copying: 167/1024 [MB] (167 MBps) [2024-10-28T18:13:43.666Z] Copying: 335/1024 [MB] (167 MBps) [2024-10-28T18:13:44.600Z] Copying: 503/1024 [MB] (168 MBps) [2024-10-28T18:13:45.989Z] Copying: 672/1024 [MB] (168 MBps) [2024-10-28T18:13:46.934Z] Copying: 836/1024 [MB] (163 MBps) [2024-10-28T18:13:46.934Z] Copying: 987/1024 [MB] (150 MBps) [2024-10-28T18:13:47.868Z] Copying: 1024/1024 [MB] (average 164 MBps) 00:23:31.390 00:23:31.648 18:13:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:23:34.181 18:13:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:23:34.181 [2024-10-28 18:13:50.160543] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
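The shell trace above is the waitfornbd helper from common/autotest_common.sh, which gates the test until the kernel-side NBD device is actually usable: one retry loop watches for nbd0 to appear in /proc/partitions, and a second proves the device services I/O by reading a single 4096 B block with O_DIRECT. A minimal sketch reconstructed from this trace (the retry pacing is an assumption; the in-tree helper may differ in detail):

    waitfornbd() {
        local nbd_name=$1 i
        # loop 1: wait for the device to show up in the partition table
        for ((i = 1; i <= 20; i++)); do
            grep -q -w "$nbd_name" /proc/partitions && break
            sleep 0.1      # assumed pacing; this run matched on the first pass
        done
        # loop 2: confirm the device answers reads: one 4 KiB O_DIRECT block
        for ((i = 1; i <= 20; i++)); do
            dd if="/dev/$nbd_name" of=nbdtest bs=4096 count=1 iflag=direct && break
            sleep 0.1
        done
        local size
        size=$(stat -c %s nbdtest)    # this run reported size=4096
        rm -f nbdtest
        [ "$size" != 0 ]              # non-empty read => device is live, return 0
    }

With nbd0 confirmed, the steps above performed the write half of the integrity check: 262144 blocks of 4096 B (1 GiB) of /dev/urandom went into testfile and were md5summed as the reference. The spdk_dd instance starting here (file-prefix spdk_pid78465 in the EAL line that follows) replays that testfile onto /dev/nbd0 with --oflag=direct, so every block traverses ftl0.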
00:23:34.181 [2024-10-28 18:13:50.160709] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78465 ] 00:23:34.181 [2024-10-28 18:13:50.335473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.181 [2024-10-28 18:13:50.463071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.567  [2024-10-28T18:13:52.982Z] Copying: 15/1024 [MB] (15 MBps) [2024-10-28T18:13:53.918Z] Copying: 30/1024 [MB] (15 MBps) [2024-10-28T18:13:54.853Z] Copying: 46/1024 [MB] (15 MBps) [2024-10-28T18:13:55.786Z] Copying: 62/1024 [MB] (15 MBps) [2024-10-28T18:13:57.158Z] Copying: 78/1024 [MB] (16 MBps) [2024-10-28T18:13:58.093Z] Copying: 94/1024 [MB] (15 MBps) [2024-10-28T18:13:59.050Z] Copying: 110/1024 [MB] (15 MBps) [2024-10-28T18:13:59.994Z] Copying: 126/1024 [MB] (16 MBps) [2024-10-28T18:14:00.928Z] Copying: 142/1024 [MB] (16 MBps) [2024-10-28T18:14:01.862Z] Copying: 158/1024 [MB] (16 MBps) [2024-10-28T18:14:02.797Z] Copying: 175/1024 [MB] (16 MBps) [2024-10-28T18:14:04.170Z] Copying: 191/1024 [MB] (16 MBps) [2024-10-28T18:14:05.103Z] Copying: 207/1024 [MB] (15 MBps) [2024-10-28T18:14:06.069Z] Copying: 223/1024 [MB] (15 MBps) [2024-10-28T18:14:07.004Z] Copying: 239/1024 [MB] (16 MBps) [2024-10-28T18:14:07.938Z] Copying: 255/1024 [MB] (15 MBps) [2024-10-28T18:14:08.873Z] Copying: 271/1024 [MB] (16 MBps) [2024-10-28T18:14:09.807Z] Copying: 286/1024 [MB] (15 MBps) [2024-10-28T18:14:11.181Z] Copying: 302/1024 [MB] (16 MBps) [2024-10-28T18:14:12.115Z] Copying: 319/1024 [MB] (16 MBps) [2024-10-28T18:14:13.050Z] Copying: 336/1024 [MB] (16 MBps) [2024-10-28T18:14:14.032Z] Copying: 352/1024 [MB] (16 MBps) [2024-10-28T18:14:14.968Z] Copying: 369/1024 [MB] (16 MBps) [2024-10-28T18:14:15.903Z] Copying: 386/1024 [MB] (16 MBps) [2024-10-28T18:14:16.839Z] Copying: 403/1024 [MB] (16 MBps) [2024-10-28T18:14:17.774Z] Copying: 420/1024 [MB] (16 MBps) [2024-10-28T18:14:19.148Z] Copying: 437/1024 [MB] (16 MBps) [2024-10-28T18:14:20.082Z] Copying: 453/1024 [MB] (16 MBps) [2024-10-28T18:14:21.016Z] Copying: 470/1024 [MB] (16 MBps) [2024-10-28T18:14:21.950Z] Copying: 486/1024 [MB] (16 MBps) [2024-10-28T18:14:22.884Z] Copying: 502/1024 [MB] (15 MBps) [2024-10-28T18:14:23.817Z] Copying: 518/1024 [MB] (16 MBps) [2024-10-28T18:14:24.753Z] Copying: 534/1024 [MB] (15 MBps) [2024-10-28T18:14:26.126Z] Copying: 549/1024 [MB] (15 MBps) [2024-10-28T18:14:27.059Z] Copying: 565/1024 [MB] (15 MBps) [2024-10-28T18:14:27.996Z] Copying: 580/1024 [MB] (15 MBps) [2024-10-28T18:14:28.930Z] Copying: 597/1024 [MB] (16 MBps) [2024-10-28T18:14:29.865Z] Copying: 613/1024 [MB] (16 MBps) [2024-10-28T18:14:30.840Z] Copying: 629/1024 [MB] (16 MBps) [2024-10-28T18:14:31.774Z] Copying: 647/1024 [MB] (17 MBps) [2024-10-28T18:14:33.151Z] Copying: 664/1024 [MB] (17 MBps) [2024-10-28T18:14:34.084Z] Copying: 681/1024 [MB] (17 MBps) [2024-10-28T18:14:35.015Z] Copying: 699/1024 [MB] (17 MBps) [2024-10-28T18:14:35.947Z] Copying: 716/1024 [MB] (17 MBps) [2024-10-28T18:14:36.882Z] Copying: 733/1024 [MB] (17 MBps) [2024-10-28T18:14:37.817Z] Copying: 750/1024 [MB] (16 MBps) [2024-10-28T18:14:38.751Z] Copying: 767/1024 [MB] (16 MBps) [2024-10-28T18:14:40.131Z] Copying: 783/1024 [MB] (15 MBps) [2024-10-28T18:14:41.061Z] Copying: 800/1024 [MB] (17 MBps) [2024-10-28T18:14:41.993Z] Copying: 816/1024 [MB] (16 MBps) 
[2024-10-28T18:14:42.929Z] Copying: 832/1024 [MB] (16 MBps) [2024-10-28T18:14:43.861Z] Copying: 850/1024 [MB] (17 MBps) [2024-10-28T18:14:44.796Z] Copying: 866/1024 [MB] (16 MBps) [2024-10-28T18:14:46.171Z] Copying: 881/1024 [MB] (14 MBps) [2024-10-28T18:14:47.105Z] Copying: 896/1024 [MB] (14 MBps) [2024-10-28T18:14:48.040Z] Copying: 912/1024 [MB] (15 MBps) [2024-10-28T18:14:49.030Z] Copying: 927/1024 [MB] (15 MBps) [2024-10-28T18:14:49.964Z] Copying: 943/1024 [MB] (15 MBps) [2024-10-28T18:14:50.897Z] Copying: 959/1024 [MB] (16 MBps) [2024-10-28T18:14:51.836Z] Copying: 976/1024 [MB] (16 MBps) [2024-10-28T18:14:52.791Z] Copying: 992/1024 [MB] (16 MBps) [2024-10-28T18:14:54.165Z] Copying: 1007/1024 [MB] (15 MBps) [2024-10-28T18:14:54.732Z] Copying: 1024/1024 [MB] (average 16 MBps) 00:24:38.254 00:24:38.513 18:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:24:38.513 18:14:54 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:24:38.771 18:14:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:24:39.028 [2024-10-28 18:14:55.312691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.028 [2024-10-28 18:14:55.312765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:24:39.028 [2024-10-28 18:14:55.312790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:24:39.028 [2024-10-28 18:14:55.312808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.028 [2024-10-28 18:14:55.312861] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:39.028 [2024-10-28 18:14:55.316266] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.028 [2024-10-28 18:14:55.316303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:39.028 [2024-10-28 18:14:55.316326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.365 ms 00:24:39.028 [2024-10-28 18:14:55.316341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.028 [2024-10-28 18:14:55.317793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.028 [2024-10-28 18:14:55.317851] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:39.028 [2024-10-28 18:14:55.317877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.395 ms 00:24:39.028 [2024-10-28 18:14:55.317892] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.028 [2024-10-28 18:14:55.333957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.028 [2024-10-28 18:14:55.334018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:39.028 [2024-10-28 18:14:55.334044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.024 ms 00:24:39.028 [2024-10-28 18:14:55.334069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.028 [2024-10-28 18:14:55.340879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.028 [2024-10-28 18:14:55.340923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:39.028 [2024-10-28 18:14:55.340946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.693 ms 00:24:39.028 [2024-10-28 18:14:55.340961] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:24:39.028 [2024-10-28 18:14:55.372429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.028 [2024-10-28 18:14:55.372487] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:39.028 [2024-10-28 18:14:55.372512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.300 ms 00:24:39.028 [2024-10-28 18:14:55.372527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.029 [2024-10-28 18:14:55.391358] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.029 [2024-10-28 18:14:55.391429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:39.029 [2024-10-28 18:14:55.391456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.705 ms 00:24:39.029 [2024-10-28 18:14:55.391475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.029 [2024-10-28 18:14:55.391729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.029 [2024-10-28 18:14:55.391767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:39.029 [2024-10-28 18:14:55.391789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.149 ms 00:24:39.029 [2024-10-28 18:14:55.391805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.029 [2024-10-28 18:14:55.423357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.029 [2024-10-28 18:14:55.423416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:39.029 [2024-10-28 18:14:55.423440] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.515 ms 00:24:39.029 [2024-10-28 18:14:55.423455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.029 [2024-10-28 18:14:55.454796] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.029 [2024-10-28 18:14:55.454867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:39.029 [2024-10-28 18:14:55.454894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.231 ms 00:24:39.029 [2024-10-28 18:14:55.454919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.029 [2024-10-28 18:14:55.485983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.029 [2024-10-28 18:14:55.486041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:39.029 [2024-10-28 18:14:55.486067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.986 ms 00:24:39.029 [2024-10-28 18:14:55.486082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.287 [2024-10-28 18:14:55.517148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.287 [2024-10-28 18:14:55.517206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:39.287 [2024-10-28 18:14:55.517232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.855 ms 00:24:39.287 [2024-10-28 18:14:55.517247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.287 [2024-10-28 18:14:55.517350] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:39.287 [2024-10-28 18:14:55.517389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:24:39.287 [2024-10-28 18:14:55.517410] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:39.287 [2024-10-28 18:14:55.517426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:39.287 [2024-10-28 18:14:55.517445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:39.287 [2024-10-28 18:14:55.517460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:39.287 [2024-10-28 18:14:55.517478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:39.287 [2024-10-28 18:14:55.517494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:39.287 [2024-10-28 18:14:55.517515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:39.287 [2024-10-28 18:14:55.517531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:39.287 [2024-10-28 18:14:55.517549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517851] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.517991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518146] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518195] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 
18:14:55.518298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518332] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 
00:24:39.288 [2024-10-28 18:14:55.518731] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518971] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.518990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519005] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519092] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:39.288 [2024-10-28 18:14:55.519188] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:39.288 
[2024-10-28 18:14:55.519206] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f41789-0530-4005-80e7-a5ffa4625272 00:24:39.289 [2024-10-28 18:14:55.519221] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:24:39.289 [2024-10-28 18:14:55.519239] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:24:39.289 [2024-10-28 18:14:55.519254] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:24:39.289 [2024-10-28 18:14:55.519274] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:24:39.289 [2024-10-28 18:14:55.519288] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:39.289 [2024-10-28 18:14:55.519314] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:39.289 [2024-10-28 18:14:55.519331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:39.289 [2024-10-28 18:14:55.519347] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:39.289 [2024-10-28 18:14:55.519360] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:39.289 [2024-10-28 18:14:55.519377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.289 [2024-10-28 18:14:55.519392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:39.289 [2024-10-28 18:14:55.519410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.037 ms 00:24:39.289 [2024-10-28 18:14:55.519424] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.289 [2024-10-28 18:14:55.536378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.289 [2024-10-28 18:14:55.536435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:39.289 [2024-10-28 18:14:55.536464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.865 ms 00:24:39.289 [2024-10-28 18:14:55.536480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.289 [2024-10-28 18:14:55.536971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:39.289 [2024-10-28 18:14:55.537003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:39.289 [2024-10-28 18:14:55.537023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.430 ms 00:24:39.289 [2024-10-28 18:14:55.537038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.289 [2024-10-28 18:14:55.592417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.289 [2024-10-28 18:14:55.592488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:39.289 [2024-10-28 18:14:55.592513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.289 [2024-10-28 18:14:55.592529] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.289 [2024-10-28 18:14:55.592621] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.289 [2024-10-28 18:14:55.592639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:39.289 [2024-10-28 18:14:55.592656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.289 [2024-10-28 18:14:55.592671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.289 [2024-10-28 18:14:55.592825] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.289 [2024-10-28 
18:14:55.592872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:39.289 [2024-10-28 18:14:55.592897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.289 [2024-10-28 18:14:55.592912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.289 [2024-10-28 18:14:55.592950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.289 [2024-10-28 18:14:55.592967] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:39.289 [2024-10-28 18:14:55.592985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.289 [2024-10-28 18:14:55.592999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.289 [2024-10-28 18:14:55.697489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.289 [2024-10-28 18:14:55.697564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:39.289 [2024-10-28 18:14:55.697590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.289 [2024-10-28 18:14:55.697606] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.548 [2024-10-28 18:14:55.782765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.548 [2024-10-28 18:14:55.782856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:39.548 [2024-10-28 18:14:55.782884] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.548 [2024-10-28 18:14:55.782900] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.548 [2024-10-28 18:14:55.783063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.548 [2024-10-28 18:14:55.783093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:39.548 [2024-10-28 18:14:55.783113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.548 [2024-10-28 18:14:55.783131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.548 [2024-10-28 18:14:55.783222] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.548 [2024-10-28 18:14:55.783252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:39.548 [2024-10-28 18:14:55.783272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.548 [2024-10-28 18:14:55.783287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.548 [2024-10-28 18:14:55.783424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.548 [2024-10-28 18:14:55.783456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:39.548 [2024-10-28 18:14:55.783476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.548 [2024-10-28 18:14:55.783491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.548 [2024-10-28 18:14:55.783558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.548 [2024-10-28 18:14:55.783579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:24:39.548 [2024-10-28 18:14:55.783597] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.548 [2024-10-28 18:14:55.783612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.548 [2024-10-28 18:14:55.783665] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.548 [2024-10-28 18:14:55.783694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:39.548 [2024-10-28 18:14:55.783713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.548 [2024-10-28 18:14:55.783727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.548 [2024-10-28 18:14:55.783807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:39.548 [2024-10-28 18:14:55.783828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:39.548 [2024-10-28 18:14:55.783869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:39.548 [2024-10-28 18:14:55.783885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:39.548 [2024-10-28 18:14:55.784055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 471.333 ms, result 0 00:24:39.548 true 00:24:39.548 18:14:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 78217 00:24:39.548 18:14:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid78217 00:24:39.548 18:14:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:24:39.548 [2024-10-28 18:14:55.896761] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:24:39.548 [2024-10-28 18:14:55.896960] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79121 ] 00:24:39.806 [2024-10-28 18:14:56.077184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:39.806 [2024-10-28 18:14:56.203651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:41.180  [2024-10-28T18:14:58.589Z] Copying: 164/1024 [MB] (164 MBps) [2024-10-28T18:14:59.522Z] Copying: 330/1024 [MB] (166 MBps) [2024-10-28T18:15:00.895Z] Copying: 492/1024 [MB] (161 MBps) [2024-10-28T18:15:01.828Z] Copying: 657/1024 [MB] (165 MBps) [2024-10-28T18:15:02.763Z] Copying: 823/1024 [MB] (166 MBps) [2024-10-28T18:15:02.763Z] Copying: 989/1024 [MB] (166 MBps) [2024-10-28T18:15:03.696Z] Copying: 1024/1024 [MB] (average 164 MBps) 00:24:47.218 00:24:47.477 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 78217 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:24:47.477 18:15:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:24:47.477 [2024-10-28 18:15:03.795693] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
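Here the dirty shutdown itself happens: dirty_shutdown.sh@83 SIGKILLs the spdk_tgt (pid 78217) while ftl0 is marked dirty, @84 removes the stale shm trace file, and bash reports the death ('line 87: 78217 Killed') once the next pipeline runs. With no target process left, the write that follows is driven by a stand-alone spdk_dd that rebuilds the bdev stack from the saved JSON config. Condensed from the trace above (paths and pid are from this run):

    kill -9 78217                            # crash the target: no clean unload, device stays dirty
    rm -f /dev/shm/spdk_tgt_trace.pid78217   # drop the dead target's trace file
    # --json replays the saved bdev subsystem config (ftl.json), so the FTL
    # startup below goes through blobstore recovery instead of a clean load;
    # --seek=262144 puts testfile2 in the second GiB of the LBA space,
    # directly after testfile's blocks 0-262143
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 \
        --ob=ftl0 --count=262144 --seek=262144 \
        --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json

Both regions can then be read back and md5-verified after recovery, which is what the 'Performing recovery on blobstore' and 'SHM: clean 0, shm_clean 0' notices in the startup below are exercising.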
00:24:47.477 [2024-10-28 18:15:03.795891] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79199 ] 00:24:47.735 [2024-10-28 18:15:03.982600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.735 [2024-10-28 18:15:04.108021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.994 [2024-10-28 18:15:04.443295] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:47.994 [2024-10-28 18:15:04.443389] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:48.252 [2024-10-28 18:15:04.510246] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:24:48.252 [2024-10-28 18:15:04.510691] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:24:48.252 [2024-10-28 18:15:04.511069] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:24:48.511 [2024-10-28 18:15:04.762008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.762071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:48.511 [2024-10-28 18:15:04.762092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:24:48.511 [2024-10-28 18:15:04.762104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.762177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.762196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:48.511 [2024-10-28 18:15:04.762209] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:48.511 [2024-10-28 18:15:04.762219] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.762251] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:48.511 [2024-10-28 18:15:04.763193] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:48.511 [2024-10-28 18:15:04.763221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.763234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:48.511 [2024-10-28 18:15:04.763246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.976 ms 00:24:48.511 [2024-10-28 18:15:04.763257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.764391] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:48.511 [2024-10-28 18:15:04.781087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.781141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:48.511 [2024-10-28 18:15:04.781158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.698 ms 00:24:48.511 [2024-10-28 18:15:04.781170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.781296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.781318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super 
block 00:24:48.511 [2024-10-28 18:15:04.781330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:24:48.511 [2024-10-28 18:15:04.781341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.785751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.785794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:48.511 [2024-10-28 18:15:04.785810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.312 ms 00:24:48.511 [2024-10-28 18:15:04.785821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.785938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.785960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:48.511 [2024-10-28 18:15:04.785973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:48.511 [2024-10-28 18:15:04.785984] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.786055] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.786079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:48.511 [2024-10-28 18:15:04.786091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:48.511 [2024-10-28 18:15:04.786102] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.786137] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:48.511 [2024-10-28 18:15:04.790588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.790627] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:48.511 [2024-10-28 18:15:04.790655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.461 ms 00:24:48.511 [2024-10-28 18:15:04.790670] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.790711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.511 [2024-10-28 18:15:04.790726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:48.511 [2024-10-28 18:15:04.790739] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:48.511 [2024-10-28 18:15:04.790772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.511 [2024-10-28 18:15:04.790824] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:48.512 [2024-10-28 18:15:04.790895] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:48.512 [2024-10-28 18:15:04.790979] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:48.512 [2024-10-28 18:15:04.791022] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:48.512 [2024-10-28 18:15:04.791162] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:48.512 [2024-10-28 18:15:04.791183] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:48.512 
[2024-10-28 18:15:04.791197] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:48.512 [2024-10-28 18:15:04.791212] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:48.512 [2024-10-28 18:15:04.791245] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:48.512 [2024-10-28 18:15:04.791266] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:48.512 [2024-10-28 18:15:04.791277] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:48.512 [2024-10-28 18:15:04.791288] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:48.512 [2024-10-28 18:15:04.791298] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:48.512 [2024-10-28 18:15:04.791311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.512 [2024-10-28 18:15:04.791323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:48.512 [2024-10-28 18:15:04.791352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.491 ms 00:24:48.512 [2024-10-28 18:15:04.791371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.512 [2024-10-28 18:15:04.791489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.512 [2024-10-28 18:15:04.791518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:48.512 [2024-10-28 18:15:04.791531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.085 ms 00:24:48.512 [2024-10-28 18:15:04.791542] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.512 [2024-10-28 18:15:04.791692] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:48.512 [2024-10-28 18:15:04.791721] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:48.512 [2024-10-28 18:15:04.791734] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:48.512 [2024-10-28 18:15:04.791746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.512 [2024-10-28 18:15:04.791758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:48.512 [2024-10-28 18:15:04.791768] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:48.512 [2024-10-28 18:15:04.791779] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:48.512 [2024-10-28 18:15:04.791789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:48.512 [2024-10-28 18:15:04.791799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:48.512 [2024-10-28 18:15:04.791809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:48.512 [2024-10-28 18:15:04.791819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:48.512 [2024-10-28 18:15:04.791862] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:48.512 [2024-10-28 18:15:04.791873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:48.512 [2024-10-28 18:15:04.791884] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:48.512 [2024-10-28 18:15:04.791895] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:48.512 [2024-10-28 18:15:04.791905] ftl_layout.c: 133:dump_region: 
*NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.512 [2024-10-28 18:15:04.791916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:48.512 [2024-10-28 18:15:04.791957] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:48.512 [2024-10-28 18:15:04.791972] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.512 [2024-10-28 18:15:04.791983] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:48.512 [2024-10-28 18:15:04.791994] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:48.512 [2024-10-28 18:15:04.792004] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.512 [2024-10-28 18:15:04.792014] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:48.512 [2024-10-28 18:15:04.792038] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:48.512 [2024-10-28 18:15:04.792050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.512 [2024-10-28 18:15:04.792061] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:48.512 [2024-10-28 18:15:04.792071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:48.512 [2024-10-28 18:15:04.792081] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.512 [2024-10-28 18:15:04.792091] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:48.512 [2024-10-28 18:15:04.792101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:48.512 [2024-10-28 18:15:04.792112] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:48.512 [2024-10-28 18:15:04.792138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:48.512 [2024-10-28 18:15:04.792152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:48.512 [2024-10-28 18:15:04.792162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:48.512 [2024-10-28 18:15:04.792173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:48.512 [2024-10-28 18:15:04.792183] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:48.512 [2024-10-28 18:15:04.792193] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:48.512 [2024-10-28 18:15:04.792203] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:48.512 [2024-10-28 18:15:04.792213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:48.512 [2024-10-28 18:15:04.792237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.512 [2024-10-28 18:15:04.792249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:48.512 [2024-10-28 18:15:04.792259] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:48.512 [2024-10-28 18:15:04.792269] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.512 [2024-10-28 18:15:04.792279] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:48.512 [2024-10-28 18:15:04.792291] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:48.512 [2024-10-28 18:15:04.792301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:48.512 [2024-10-28 18:15:04.792337] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:48.512 [2024-10-28 
18:15:04.792351] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:48.512 [2024-10-28 18:15:04.792362] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:48.512 [2024-10-28 18:15:04.792373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:48.512 [2024-10-28 18:15:04.792384] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:48.512 [2024-10-28 18:15:04.792394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:48.512 [2024-10-28 18:15:04.792405] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:48.512 [2024-10-28 18:15:04.792431] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:48.512 [2024-10-28 18:15:04.792448] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.512 [2024-10-28 18:15:04.792460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:48.512 [2024-10-28 18:15:04.792472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:48.512 [2024-10-28 18:15:04.792483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:48.512 [2024-10-28 18:15:04.792494] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:48.512 [2024-10-28 18:15:04.792505] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:48.512 [2024-10-28 18:15:04.792533] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:48.512 [2024-10-28 18:15:04.792546] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:48.512 [2024-10-28 18:15:04.792558] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:48.512 [2024-10-28 18:15:04.792568] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:48.512 [2024-10-28 18:15:04.792579] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:48.512 [2024-10-28 18:15:04.792591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:48.512 [2024-10-28 18:15:04.792602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:48.512 [2024-10-28 18:15:04.792623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:48.512 [2024-10-28 18:15:04.792637] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:48.512 [2024-10-28 18:15:04.792648] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - 
base dev: 00:24:48.512 [2024-10-28 18:15:04.792660] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:48.512 [2024-10-28 18:15:04.792672] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:24:48.512 [2024-10-28 18:15:04.792684] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:48.512 [2024-10-28 18:15:04.792694] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:48.512 [2024-10-28 18:15:04.792706] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:48.512 [2024-10-28 18:15:04.792740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.512 [2024-10-28 18:15:04.792754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:48.512 [2024-10-28 18:15:04.792766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:24:48.512 [2024-10-28 18:15:04.792777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.512 [2024-10-28 18:15:04.826037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.512 [2024-10-28 18:15:04.826102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:48.512 [2024-10-28 18:15:04.826123] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.163 ms 00:24:48.512 [2024-10-28 18:15:04.826135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.513 [2024-10-28 18:15:04.826260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.513 [2024-10-28 18:15:04.826276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:48.513 [2024-10-28 18:15:04.826289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:24:48.513 [2024-10-28 18:15:04.826300] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.513 [2024-10-28 18:15:04.886710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.513 [2024-10-28 18:15:04.886768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:48.513 [2024-10-28 18:15:04.886793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.314 ms 00:24:48.513 [2024-10-28 18:15:04.886805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.513 [2024-10-28 18:15:04.886892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.513 [2024-10-28 18:15:04.886911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:48.513 [2024-10-28 18:15:04.886924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:48.513 [2024-10-28 18:15:04.886934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.513 [2024-10-28 18:15:04.887384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.513 [2024-10-28 18:15:04.887415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:48.513 [2024-10-28 18:15:04.887438] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.328 ms 00:24:48.513 [2024-10-28 18:15:04.887471] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.513 [2024-10-28 18:15:04.887652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.513 [2024-10-28 18:15:04.887674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:48.513 [2024-10-28 18:15:04.887697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.147 ms 00:24:48.513 [2024-10-28 18:15:04.887713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.513 [2024-10-28 18:15:04.906935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.513 [2024-10-28 18:15:04.907017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:48.513 [2024-10-28 18:15:04.907038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.181 ms 00:24:48.513 [2024-10-28 18:15:04.907051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.513 [2024-10-28 18:15:04.924728] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:24:48.513 [2024-10-28 18:15:04.924823] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:48.513 [2024-10-28 18:15:04.924870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.513 [2024-10-28 18:15:04.924889] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:48.513 [2024-10-28 18:15:04.924925] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.624 ms 00:24:48.513 [2024-10-28 18:15:04.924955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.513 [2024-10-28 18:15:04.959488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.513 [2024-10-28 18:15:04.959585] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:48.513 [2024-10-28 18:15:04.959631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.376 ms 00:24:48.513 [2024-10-28 18:15:04.959645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.513 [2024-10-28 18:15:04.980066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.513 [2024-10-28 18:15:04.980164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:48.513 [2024-10-28 18:15:04.980197] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.263 ms 00:24:48.513 [2024-10-28 18:15:04.980218] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.772 [2024-10-28 18:15:05.007489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.772 [2024-10-28 18:15:05.007569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:48.772 [2024-10-28 18:15:05.007593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.100 ms 00:24:48.772 [2024-10-28 18:15:05.007608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.772 [2024-10-28 18:15:05.008701] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.772 [2024-10-28 18:15:05.008743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:48.772 [2024-10-28 18:15:05.008761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.828 ms 00:24:48.772 [2024-10-28 18:15:05.008774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:48.772 [2024-10-28 18:15:05.134326] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.772 [2024-10-28 18:15:05.134429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:48.772 [2024-10-28 18:15:05.134465] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 125.516 ms 00:24:48.773 [2024-10-28 18:15:05.134489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.773 [2024-10-28 18:15:05.150614] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:48.773 [2024-10-28 18:15:05.153871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.773 [2024-10-28 18:15:05.153926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:48.773 [2024-10-28 18:15:05.153962] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.238 ms 00:24:48.773 [2024-10-28 18:15:05.154001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.773 [2024-10-28 18:15:05.154239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.773 [2024-10-28 18:15:05.154275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:48.773 [2024-10-28 18:15:05.154303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:48.773 [2024-10-28 18:15:05.154326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.773 [2024-10-28 18:15:05.154490] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.773 [2024-10-28 18:15:05.154526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:48.773 [2024-10-28 18:15:05.154552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.057 ms 00:24:48.773 [2024-10-28 18:15:05.154574] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.773 [2024-10-28 18:15:05.154646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.773 [2024-10-28 18:15:05.154676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:48.773 [2024-10-28 18:15:05.154702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:24:48.773 [2024-10-28 18:15:05.154724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.773 [2024-10-28 18:15:05.154806] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:48.773 [2024-10-28 18:15:05.154860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.773 [2024-10-28 18:15:05.154887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:48.773 [2024-10-28 18:15:05.154911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:24:48.773 [2024-10-28 18:15:05.154959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.773 [2024-10-28 18:15:05.194849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.773 [2024-10-28 18:15:05.194941] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:48.773 [2024-10-28 18:15:05.195003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.814 ms 00:24:48.773 [2024-10-28 18:15:05.195026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.773 [2024-10-28 18:15:05.195236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:48.773 [2024-10-28 
18:15:05.195284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:48.773 [2024-10-28 18:15:05.195315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.079 ms 00:24:48.773 [2024-10-28 18:15:05.195340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:48.773 [2024-10-28 18:15:05.197131] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 434.455 ms, result 0 00:24:50.148  [2024-10-28T18:15:07.561Z] Copying: 26/1024 [MB] (26 MBps) ... [2024-10-28T18:15:45.416Z] Copying: 1023/1024 [MB] (26 MBps) [2024-10-28T18:15:45.416Z] Copying: 1048528/1048576 [kB] (816 kBps) [2024-10-28T18:15:45.416Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-10-28 18:15:45.281625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.938 [2024-10-28 18:15:45.281708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:28.938 [2024-10-28 18:15:45.281733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:28.938 [2024-10-28 18:15:45.281747] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.938 [2024-10-28 18:15:45.285473] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO 
channel destroy on app_thread 00:25:28.938 [2024-10-28 18:15:45.292023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.938 [2024-10-28 18:15:45.292077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:28.938 [2024-10-28 18:15:45.292097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.490 ms 00:25:28.938 [2024-10-28 18:15:45.292119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.938 [2024-10-28 18:15:45.302952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.938 [2024-10-28 18:15:45.303028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:28.938 [2024-10-28 18:15:45.303048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.562 ms 00:25:28.938 [2024-10-28 18:15:45.303060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.938 [2024-10-28 18:15:45.322672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.938 [2024-10-28 18:15:45.322753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:28.938 [2024-10-28 18:15:45.322776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.583 ms 00:25:28.938 [2024-10-28 18:15:45.322788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.938 [2024-10-28 18:15:45.330037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.938 [2024-10-28 18:15:45.330102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:28.938 [2024-10-28 18:15:45.330130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.117 ms 00:25:28.938 [2024-10-28 18:15:45.330153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.938 [2024-10-28 18:15:45.366669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.938 [2024-10-28 18:15:45.366760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:28.938 [2024-10-28 18:15:45.366783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 36.373 ms 00:25:28.938 [2024-10-28 18:15:45.366794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:28.938 [2024-10-28 18:15:45.387251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:28.938 [2024-10-28 18:15:45.387343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:28.938 [2024-10-28 18:15:45.387367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.342 ms 00:25:28.938 [2024-10-28 18:15:45.387379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.196 [2024-10-28 18:15:45.448679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.196 [2024-10-28 18:15:45.448763] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:29.196 [2024-10-28 18:15:45.448800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 61.133 ms 00:25:29.196 [2024-10-28 18:15:45.448813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.196 [2024-10-28 18:15:45.482554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.196 [2024-10-28 18:15:45.482647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:29.196 [2024-10-28 18:15:45.482669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.699 ms 00:25:29.196 
[2024-10-28 18:15:45.482681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.196 [2024-10-28 18:15:45.515805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.196 [2024-10-28 18:15:45.515886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:29.196 [2024-10-28 18:15:45.515917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.025 ms 00:25:29.196 [2024-10-28 18:15:45.515929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.196 [2024-10-28 18:15:45.548314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.196 [2024-10-28 18:15:45.548387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:29.196 [2024-10-28 18:15:45.548406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.309 ms 00:25:29.196 [2024-10-28 18:15:45.548417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.196 [2024-10-28 18:15:45.580332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.196 [2024-10-28 18:15:45.580409] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:29.196 [2024-10-28 18:15:45.580429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.768 ms 00:25:29.196 [2024-10-28 18:15:45.580440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.197 [2024-10-28 18:15:45.580521] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:29.197 [2024-10-28 18:15:45.580549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 95488 / 261120 wr_cnt: 1 state: open 00:25:29.197 [2024-10-28 18:15:45.580563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580662] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 
state: free 00:25:29.197 [2024-10-28 18:15:45.580768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.580984] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581007] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581031] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 
0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581403] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581784] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.581987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582281] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:29.197 [2024-10-28 18:15:45.582347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:29.198 [2024-10-28 18:15:45.582367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:29.198 [2024-10-28 18:15:45.582379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:29.198 [2024-10-28 18:15:45.582391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:29.198 [2024-10-28 18:15:45.582402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:29.198 [2024-10-28 18:15:45.582414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:29.198 [2024-10-28 18:15:45.582433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:29.198 [2024-10-28 18:15:45.582454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:29.198 [2024-10-28 18:15:45.582477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:29.198 [2024-10-28 18:15:45.582516] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:29.198 [2024-10-28 18:15:45.582540] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f41789-0530-4005-80e7-a5ffa4625272 00:25:29.198 [2024-10-28 18:15:45.582567] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 95488 00:25:29.198 [2024-10-28 18:15:45.582578] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 96448 00:25:29.198 [2024-10-28 18:15:45.582615] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 95488 00:25:29.198 [2024-10-28 18:15:45.582639] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0101 00:25:29.198 [2024-10-28 18:15:45.582661] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:29.198 [2024-10-28 18:15:45.582685] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:29.198 [2024-10-28 18:15:45.582705] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:29.198 [2024-10-28 18:15:45.582725] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:29.198 [2024-10-28 18:15:45.582744] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:29.198 [2024-10-28 18:15:45.582758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.198 [2024-10-28 18:15:45.582770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:29.198 [2024-10-28 18:15:45.582782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.239 ms 00:25:29.198 [2024-10-28 18:15:45.582797] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.198 [2024-10-28 18:15:45.599932] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.198 [2024-10-28 18:15:45.600002] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:29.198 [2024-10-28 18:15:45.600022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.979 ms 00:25:29.198 [2024-10-28 18:15:45.600033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.198 [2024-10-28 18:15:45.600576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:29.198 [2024-10-28 18:15:45.600615] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:29.198 [2024-10-28 18:15:45.600630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.482 ms 00:25:29.198 [2024-10-28 18:15:45.600653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.198 [2024-10-28 18:15:45.644152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.198 [2024-10-28 18:15:45.644234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:29.198 [2024-10-28 18:15:45.644265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.198 [2024-10-28 18:15:45.644278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.198 [2024-10-28 18:15:45.644372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.198 [2024-10-28 18:15:45.644388] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:29.198 [2024-10-28 18:15:45.644400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.198 [2024-10-28 18:15:45.644416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.198 [2024-10-28 18:15:45.644551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.198 [2024-10-28 18:15:45.644588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:29.198 [2024-10-28 18:15:45.644614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.198 [2024-10-28 18:15:45.644635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.198 [2024-10-28 18:15:45.644672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.198 [2024-10-28 18:15:45.644696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:29.198 [2024-10-28 18:15:45.644715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.198 [2024-10-28 18:15:45.644735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.456 [2024-10-28 18:15:45.751967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.456 [2024-10-28 18:15:45.752046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:29.456 [2024-10-28 18:15:45.752067] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.456 [2024-10-28 18:15:45.752080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.456 [2024-10-28 18:15:45.845576] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.456 [2024-10-28 18:15:45.845655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:29.456 [2024-10-28 18:15:45.845676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.456 [2024-10-28 18:15:45.845701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.456 [2024-10-28 18:15:45.845814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Rollback 00:25:29.456 [2024-10-28 18:15:45.845869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:29.456 [2024-10-28 18:15:45.845903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.456 [2024-10-28 18:15:45.845927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.456 [2024-10-28 18:15:45.846006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.456 [2024-10-28 18:15:45.846038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:29.456 [2024-10-28 18:15:45.846061] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.456 [2024-10-28 18:15:45.846079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.456 [2024-10-28 18:15:45.846270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.456 [2024-10-28 18:15:45.846314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:29.456 [2024-10-28 18:15:45.846341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.456 [2024-10-28 18:15:45.846362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.456 [2024-10-28 18:15:45.846458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.456 [2024-10-28 18:15:45.846497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:29.456 [2024-10-28 18:15:45.846521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.456 [2024-10-28 18:15:45.846543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.456 [2024-10-28 18:15:45.846619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.456 [2024-10-28 18:15:45.846646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:29.456 [2024-10-28 18:15:45.846670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.456 [2024-10-28 18:15:45.846697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.456 [2024-10-28 18:15:45.846773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:29.456 [2024-10-28 18:15:45.846803] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:29.456 [2024-10-28 18:15:45.846826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:29.456 [2024-10-28 18:15:45.846888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:29.456 [2024-10-28 18:15:45.847116] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 567.163 ms, result 0 00:25:30.830 00:25:30.830 00:25:30.830 18:15:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:25:33.357 18:15:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:33.357 [2024-10-28 18:15:49.517875] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:25:33.357 [2024-10-28 18:15:49.518262] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79647 ] 00:25:33.357 [2024-10-28 18:15:49.710575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:33.615 [2024-10-28 18:15:49.852959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.873 [2024-10-28 18:15:50.198321] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:33.873 [2024-10-28 18:15:50.198407] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:25:34.132 [2024-10-28 18:15:50.365344] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.132 [2024-10-28 18:15:50.365447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:34.133 [2024-10-28 18:15:50.365491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:25:34.133 [2024-10-28 18:15:50.365511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.365617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.365645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:34.133 [2024-10-28 18:15:50.365675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:25:34.133 [2024-10-28 18:15:50.365695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.365747] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:34.133 [2024-10-28 18:15:50.367176] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:34.133 [2024-10-28 18:15:50.367236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.367260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:34.133 [2024-10-28 18:15:50.367284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.499 ms 00:25:34.133 [2024-10-28 18:15:50.367305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.368753] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:25:34.133 [2024-10-28 18:15:50.386697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.386774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:25:34.133 [2024-10-28 18:15:50.386796] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.942 ms 00:25:34.133 [2024-10-28 18:15:50.386808] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.386975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.386999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:25:34.133 [2024-10-28 18:15:50.387020] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:34.133 [2024-10-28 18:15:50.387043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.392245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:34.133 [2024-10-28 18:15:50.392312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:34.133 [2024-10-28 18:15:50.392332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.021 ms 00:25:34.133 [2024-10-28 18:15:50.392344] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.392471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.392491] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:34.133 [2024-10-28 18:15:50.392504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:25:34.133 [2024-10-28 18:15:50.392515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.392591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.392608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:34.133 [2024-10-28 18:15:50.392620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:34.133 [2024-10-28 18:15:50.392631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.392666] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:34.133 [2024-10-28 18:15:50.397516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.397573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:34.133 [2024-10-28 18:15:50.397599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.860 ms 00:25:34.133 [2024-10-28 18:15:50.397630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.397696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.397727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:34.133 [2024-10-28 18:15:50.397753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:25:34.133 [2024-10-28 18:15:50.397772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.397907] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:25:34.133 [2024-10-28 18:15:50.397946] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:25:34.133 [2024-10-28 18:15:50.397990] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:25:34.133 [2024-10-28 18:15:50.398013] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:25:34.133 [2024-10-28 18:15:50.398127] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:34.133 [2024-10-28 18:15:50.398142] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:34.133 [2024-10-28 18:15:50.398157] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:34.133 [2024-10-28 18:15:50.398171] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:34.133 [2024-10-28 18:15:50.398185] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:34.133 [2024-10-28 18:15:50.398197] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:34.133 [2024-10-28 18:15:50.398208] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:34.133 [2024-10-28 18:15:50.398218] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:34.133 [2024-10-28 18:15:50.398228] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:34.133 [2024-10-28 18:15:50.398245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.398256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:34.133 [2024-10-28 18:15:50.398268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.343 ms 00:25:34.133 [2024-10-28 18:15:50.398279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.398381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.133 [2024-10-28 18:15:50.398403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:34.133 [2024-10-28 18:15:50.398415] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:34.133 [2024-10-28 18:15:50.398427] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.133 [2024-10-28 18:15:50.398550] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:34.133 [2024-10-28 18:15:50.398574] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:34.133 [2024-10-28 18:15:50.398587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.133 [2024-10-28 18:15:50.398598] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:34.133 [2024-10-28 18:15:50.398620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398631] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:34.133 [2024-10-28 18:15:50.398641] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:34.133 [2024-10-28 18:15:50.398651] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.133 [2024-10-28 18:15:50.398671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:34.133 [2024-10-28 18:15:50.398681] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:34.133 [2024-10-28 18:15:50.398691] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:34.133 [2024-10-28 18:15:50.398701] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:34.133 [2024-10-28 18:15:50.398711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:34.133 [2024-10-28 18:15:50.398732] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:34.133 [2024-10-28 18:15:50.398753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:34.133 [2024-10-28 18:15:50.398763] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:34.133 [2024-10-28 18:15:50.398783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.133 [2024-10-28 18:15:50.398803] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:34.133 [2024-10-28 18:15:50.398813] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.133 [2024-10-28 18:15:50.398846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:34.133 [2024-10-28 18:15:50.398860] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398870] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.133 [2024-10-28 18:15:50.398880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:34.133 [2024-10-28 18:15:50.398890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398900] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:34.133 [2024-10-28 18:15:50.398910] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:34.133 [2024-10-28 18:15:50.398920] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:34.133 [2024-10-28 18:15:50.398930] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.133 [2024-10-28 18:15:50.398940] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:34.133 [2024-10-28 18:15:50.398950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:34.133 [2024-10-28 18:15:50.398961] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:34.133 [2024-10-28 18:15:50.398971] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:34.133 [2024-10-28 18:15:50.398981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:34.133 [2024-10-28 18:15:50.398991] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.133 [2024-10-28 18:15:50.399000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:34.133 [2024-10-28 18:15:50.399010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:34.133 [2024-10-28 18:15:50.399020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.133 [2024-10-28 18:15:50.399030] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:34.133 [2024-10-28 18:15:50.399041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:34.133 [2024-10-28 18:15:50.399051] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:34.133 [2024-10-28 18:15:50.399062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:34.134 [2024-10-28 18:15:50.399073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:34.134 [2024-10-28 18:15:50.399084] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:34.134 [2024-10-28 18:15:50.399094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:34.134 
[2024-10-28 18:15:50.399104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:34.134 [2024-10-28 18:15:50.399114] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:34.134 [2024-10-28 18:15:50.399124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:34.134 [2024-10-28 18:15:50.399136] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:34.134 [2024-10-28 18:15:50.399164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.134 [2024-10-28 18:15:50.399178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:34.134 [2024-10-28 18:15:50.399189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:34.134 [2024-10-28 18:15:50.399200] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:34.134 [2024-10-28 18:15:50.399211] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:34.134 [2024-10-28 18:15:50.399223] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:34.134 [2024-10-28 18:15:50.399234] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:34.134 [2024-10-28 18:15:50.399245] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:34.134 [2024-10-28 18:15:50.399256] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:34.134 [2024-10-28 18:15:50.399267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:34.134 [2024-10-28 18:15:50.399278] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:34.134 [2024-10-28 18:15:50.399289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:34.134 [2024-10-28 18:15:50.399300] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:34.134 [2024-10-28 18:15:50.399311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:34.134 [2024-10-28 18:15:50.399323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:34.134 [2024-10-28 18:15:50.399335] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:34.134 [2024-10-28 18:15:50.399353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:34.134 [2024-10-28 18:15:50.399366] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:25:34.134 [2024-10-28 18:15:50.399377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:34.134 [2024-10-28 18:15:50.399389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:34.134 [2024-10-28 18:15:50.399400] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:34.134 [2024-10-28 18:15:50.399412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.399424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:34.134 [2024-10-28 18:15:50.399435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.934 ms 00:25:34.134 [2024-10-28 18:15:50.399446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.434268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.434339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:34.134 [2024-10-28 18:15:50.434361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 34.757 ms 00:25:34.134 [2024-10-28 18:15:50.434373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.434498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.434514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:34.134 [2024-10-28 18:15:50.434527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.066 ms 00:25:34.134 [2024-10-28 18:15:50.434538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.486431] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.486544] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:34.134 [2024-10-28 18:15:50.486582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 51.780 ms 00:25:34.134 [2024-10-28 18:15:50.486608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.486723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.486755] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:34.134 [2024-10-28 18:15:50.486779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:25:34.134 [2024-10-28 18:15:50.486810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.487378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.487427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:34.134 [2024-10-28 18:15:50.487453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.395 ms 00:25:34.134 [2024-10-28 18:15:50.487472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.487719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.487765] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:34.134 [2024-10-28 18:15:50.487787] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:25:34.134 [2024-10-28 18:15:50.487818] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.505553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.505619] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:34.134 [2024-10-28 18:15:50.505644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.665 ms 00:25:34.134 [2024-10-28 18:15:50.505656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.522313] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:25:34.134 [2024-10-28 18:15:50.522372] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:25:34.134 [2024-10-28 18:15:50.522393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.522405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:25:34.134 [2024-10-28 18:15:50.522420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.526 ms 00:25:34.134 [2024-10-28 18:15:50.522431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.552803] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.552907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:25:34.134 [2024-10-28 18:15:50.552929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.292 ms 00:25:34.134 [2024-10-28 18:15:50.552942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.571918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.572008] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:25:34.134 [2024-10-28 18:15:50.572029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.865 ms 00:25:34.134 [2024-10-28 18:15:50.572041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.589810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.589888] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:25:34.134 [2024-10-28 18:15:50.589909] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.685 ms 00:25:34.134 [2024-10-28 18:15:50.589920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.134 [2024-10-28 18:15:50.590955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.134 [2024-10-28 18:15:50.590996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:34.134 [2024-10-28 18:15:50.591012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.741 ms 00:25:34.134 [2024-10-28 18:15:50.591029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.393 [2024-10-28 18:15:50.667538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.393 [2024-10-28 18:15:50.667629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:25:34.393 [2024-10-28 18:15:50.667664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 76.478 ms 00:25:34.393 [2024-10-28 18:15:50.667676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.393 [2024-10-28 18:15:50.681007] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:34.393 [2024-10-28 18:15:50.683873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.393 [2024-10-28 18:15:50.683918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:34.393 [2024-10-28 18:15:50.683937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.101 ms 00:25:34.393 [2024-10-28 18:15:50.683948] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.393 [2024-10-28 18:15:50.684080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.393 [2024-10-28 18:15:50.684101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:25:34.393 [2024-10-28 18:15:50.684114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:34.393 [2024-10-28 18:15:50.684130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.393 [2024-10-28 18:15:50.685550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.393 [2024-10-28 18:15:50.685590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:34.393 [2024-10-28 18:15:50.685605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.349 ms 00:25:34.393 [2024-10-28 18:15:50.685615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.393 [2024-10-28 18:15:50.685655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.393 [2024-10-28 18:15:50.685671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:34.393 [2024-10-28 18:15:50.685687] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:34.393 [2024-10-28 18:15:50.685705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.393 [2024-10-28 18:15:50.685763] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:25:34.393 [2024-10-28 18:15:50.685789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.393 [2024-10-28 18:15:50.685808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:25:34.393 [2024-10-28 18:15:50.685826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:25:34.393 [2024-10-28 18:15:50.685853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.393 [2024-10-28 18:15:50.717414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.393 [2024-10-28 18:15:50.717505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:34.393 [2024-10-28 18:15:50.717527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.525 ms 00:25:34.393 [2024-10-28 18:15:50.717550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:34.393 [2024-10-28 18:15:50.717664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:34.393 [2024-10-28 18:15:50.717683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:34.393 [2024-10-28 18:15:50.717696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:25:34.393 [2024-10-28 18:15:50.717708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
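Each management step above is emitted by mngt/ftl_mngt.c as a group of records: Action, name, duration, and status; the finish_msg record that follows rolls the whole 'FTL startup' process up into one total (354.339 ms here). A minimal sketch for tallying those per-step durations from a journal like this one — it assumes the original one-record-per-line form and the ftl0 device name, and the script itself is hypothetical, not part of the test suite:

import re
import sys

# A trace_step group spans consecutive records:
#   ... 428:trace_step: *NOTICE*: [FTL][ftl0] name: <step name>
#   ... 430:trace_step: *NOTICE*: [FTL][ftl0] duration: <float> ms
NAME = re.compile(r"\[FTL\]\[ftl0\] name: (.*\S)")
DURATION = re.compile(r"\[FTL\]\[ftl0\] duration: ([0-9.]+) ms")

def main(path: str) -> None:
    steps, pending = [], None
    for line in open(path, encoding="utf-8", errors="replace"):
        m = NAME.search(line)
        if m:
            pending = m.group(1)  # remember the step name until its duration record arrives
            continue
        m = DURATION.search(line)
        if m and pending is not None:
            steps.append((pending, float(m.group(1))))
            pending = None
    for name, ms in steps:
        print(f"{ms:10.3f} ms  {name}")
    print(f"{sum(ms for _, ms in steps):10.3f} ms  summed over {len(steps)} steps")

if __name__ == "__main__":
    main(sys.argv[1])

Summing the step durations of this startup roughly accounts for the 354.339 ms total reported by finish_msg below; the small remainder is time spent between steps.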
00:25:34.393 [2024-10-28 18:15:50.720733] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 354.339 ms, result 0 00:25:35.766  [2024-10-28T18:15:53.206Z] Copying: 836/1048576 [kB] (836 kBps) [2024-10-28T18:15:54.140Z] Copying: 1640/1048576 [kB] (804 kBps) [2024-10-28T18:15:55.072Z] Copying: 5764/1048576 [kB] (4124 kBps) [2024-10-28T18:15:56.006Z] Copying: 32/1024 [MB] (27 MBps) [2024-10-28T18:15:56.977Z] Copying: 61/1024 [MB] (28 MBps) [2024-10-28T18:15:58.350Z] Copying: 91/1024 [MB] (29 MBps) [2024-10-28T18:15:59.286Z] Copying: 121/1024 [MB] (30 MBps) [2024-10-28T18:16:00.246Z] Copying: 151/1024 [MB] (30 MBps) [2024-10-28T18:16:01.185Z] Copying: 181/1024 [MB] (29 MBps) [2024-10-28T18:16:02.117Z] Copying: 212/1024 [MB] (30 MBps) [2024-10-28T18:16:03.050Z] Copying: 241/1024 [MB] (28 MBps) [2024-10-28T18:16:03.983Z] Copying: 269/1024 [MB] (28 MBps) [2024-10-28T18:16:05.378Z] Copying: 295/1024 [MB] (26 MBps) [2024-10-28T18:16:05.958Z] Copying: 325/1024 [MB] (29 MBps) [2024-10-28T18:16:07.330Z] Copying: 353/1024 [MB] (27 MBps) [2024-10-28T18:16:08.264Z] Copying: 381/1024 [MB] (27 MBps) [2024-10-28T18:16:09.196Z] Copying: 410/1024 [MB] (29 MBps) [2024-10-28T18:16:10.132Z] Copying: 440/1024 [MB] (29 MBps) [2024-10-28T18:16:11.067Z] Copying: 469/1024 [MB] (29 MBps) [2024-10-28T18:16:12.006Z] Copying: 495/1024 [MB] (26 MBps) [2024-10-28T18:16:13.380Z] Copying: 520/1024 [MB] (24 MBps) [2024-10-28T18:16:14.313Z] Copying: 547/1024 [MB] (27 MBps) [2024-10-28T18:16:15.247Z] Copying: 578/1024 [MB] (30 MBps) [2024-10-28T18:16:16.181Z] Copying: 600/1024 [MB] (22 MBps) [2024-10-28T18:16:17.115Z] Copying: 626/1024 [MB] (25 MBps) [2024-10-28T18:16:18.051Z] Copying: 650/1024 [MB] (24 MBps) [2024-10-28T18:16:18.985Z] Copying: 676/1024 [MB] (26 MBps) [2024-10-28T18:16:20.359Z] Copying: 706/1024 [MB] (29 MBps) [2024-10-28T18:16:21.292Z] Copying: 735/1024 [MB] (29 MBps) [2024-10-28T18:16:22.227Z] Copying: 766/1024 [MB] (30 MBps) [2024-10-28T18:16:23.159Z] Copying: 796/1024 [MB] (29 MBps) [2024-10-28T18:16:24.089Z] Copying: 824/1024 [MB] (28 MBps) [2024-10-28T18:16:25.033Z] Copying: 855/1024 [MB] (30 MBps) [2024-10-28T18:16:25.965Z] Copying: 885/1024 [MB] (30 MBps) [2024-10-28T18:16:27.341Z] Copying: 914/1024 [MB] (28 MBps) [2024-10-28T18:16:28.276Z] Copying: 942/1024 [MB] (28 MBps) [2024-10-28T18:16:29.210Z] Copying: 971/1024 [MB] (29 MBps) [2024-10-28T18:16:29.777Z] Copying: 1001/1024 [MB] (29 MBps) [2024-10-28T18:16:30.044Z] Copying: 1024/1024 [MB] (average 26 MBps)[2024-10-28 18:16:29.874074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.566 [2024-10-28 18:16:29.874161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:13.566 [2024-10-28 18:16:29.874202] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:13.566 [2024-10-28 18:16:29.874216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.566 [2024-10-28 18:16:29.874265] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:13.566 [2024-10-28 18:16:29.878965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.566 [2024-10-28 18:16:29.879016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:13.566 [2024-10-28 18:16:29.879035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.671 ms 00:26:13.566 [2024-10-28 18:16:29.879049] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.566 [2024-10-28 18:16:29.879416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.566 [2024-10-28 18:16:29.879478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:13.566 [2024-10-28 18:16:29.879522] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.326 ms 00:26:13.566 [2024-10-28 18:16:29.879548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.566 [2024-10-28 18:16:29.893672] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.566 [2024-10-28 18:16:29.893749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:13.566 [2024-10-28 18:16:29.893772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.077 ms 00:26:13.566 [2024-10-28 18:16:29.893787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.566 [2024-10-28 18:16:29.902174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.566 [2024-10-28 18:16:29.902228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:13.566 [2024-10-28 18:16:29.902247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.322 ms 00:26:13.566 [2024-10-28 18:16:29.902273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.566 [2024-10-28 18:16:29.941474] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.566 [2024-10-28 18:16:29.941565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:13.566 [2024-10-28 18:16:29.941591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.103 ms 00:26:13.566 [2024-10-28 18:16:29.941605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.566 [2024-10-28 18:16:29.963712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.566 [2024-10-28 18:16:29.963787] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:13.566 [2024-10-28 18:16:29.963810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.022 ms 00:26:13.566 [2024-10-28 18:16:29.963824] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.566 [2024-10-28 18:16:29.965755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.566 [2024-10-28 18:16:29.965808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:13.567 [2024-10-28 18:16:29.965826] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.819 ms 00:26:13.567 [2024-10-28 18:16:29.965858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.567 [2024-10-28 18:16:30.005198] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.567 [2024-10-28 18:16:30.005282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:13.567 [2024-10-28 18:16:30.005305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.294 ms 00:26:13.567 [2024-10-28 18:16:30.005320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.824 [2024-10-28 18:16:30.044488] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.824 [2024-10-28 18:16:30.044568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:13.824 [2024-10-28 18:16:30.044609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.071 ms 
00:26:13.824 [2024-10-28 18:16:30.044624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.824 [2024-10-28 18:16:30.083146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.824 [2024-10-28 18:16:30.083228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:13.824 [2024-10-28 18:16:30.083251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.440 ms 00:26:13.824 [2024-10-28 18:16:30.083265] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.824 [2024-10-28 18:16:30.121273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.824 [2024-10-28 18:16:30.121343] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:13.824 [2024-10-28 18:16:30.121366] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 37.855 ms 00:26:13.824 [2024-10-28 18:16:30.121379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.824 [2024-10-28 18:16:30.121446] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:13.824 [2024-10-28 18:16:30.121474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:26:13.824 [2024-10-28 18:16:30.121492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:26:13.824 [2024-10-28 18:16:30.121506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:13.824 [2024-10-28 18:16:30.121520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121661] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121763] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: 
free 00:26:13.825 [2024-10-28 18:16:30.121791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.121998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 
261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122758] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.122977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123084] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123160] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123418] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123565] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:13.825 [2024-10-28 18:16:30.123699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:13.826 [2024-10-28 18:16:30.123725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:13.826 [2024-10-28 18:16:30.123750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:13.826 [2024-10-28 18:16:30.123789] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:13.826 [2024-10-28 18:16:30.123816] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f41789-0530-4005-80e7-a5ffa4625272 00:26:13.826 [2024-10-28 18:16:30.123866] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:26:13.826 [2024-10-28 18:16:30.123895] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 169152 00:26:13.826 [2024-10-28 18:16:30.123920] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 167168 00:26:13.826 [2024-10-28 18:16:30.123957] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0119 00:26:13.826 [2024-10-28 18:16:30.123976] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:13.826 [2024-10-28 18:16:30.124001] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:13.826 [2024-10-28 18:16:30.124027] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:13.826 [2024-10-28 18:16:30.124074] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:13.826 [2024-10-28 18:16:30.124091] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:13.826 [2024-10-28 18:16:30.124114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.826 [2024-10-28 18:16:30.124137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:13.826 [2024-10-28 18:16:30.124162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.669 ms 00:26:13.826 [2024-10-28 18:16:30.124179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.826 [2024-10-28 18:16:30.144635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.826 [2024-10-28 18:16:30.144712] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:13.826 [2024-10-28 18:16:30.144734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.373 ms 00:26:13.826 [2024-10-28 18:16:30.144748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.826 [2024-10-28 18:16:30.145443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:13.826 [2024-10-28 18:16:30.145481] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:13.826 [2024-10-28 18:16:30.145499] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.645 ms 00:26:13.826 [2024-10-28 18:16:30.145512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.826 [2024-10-28 18:16:30.199824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.826 [2024-10-28 18:16:30.199913] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:13.826 [2024-10-28 18:16:30.199936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.826 [2024-10-28 18:16:30.199950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.826 [2024-10-28 18:16:30.200050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.826 [2024-10-28 18:16:30.200069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:13.826 [2024-10-28 18:16:30.200083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.826 [2024-10-28 18:16:30.200096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.826 [2024-10-28 18:16:30.200232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.826 [2024-10-28 18:16:30.200275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:13.826 [2024-10-28 18:16:30.200303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.826 [2024-10-28 18:16:30.200328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:13.826 [2024-10-28 18:16:30.200364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:13.826 [2024-10-28 18:16:30.200391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:13.826 [2024-10-28 18:16:30.200414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:13.826 [2024-10-28 18:16:30.200436] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.084 [2024-10-28 18:16:30.327681] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.084 [2024-10-28 18:16:30.327758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:14.084 [2024-10-28 18:16:30.327780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.084 [2024-10-28 18:16:30.327794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.084 [2024-10-28 18:16:30.458453] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.084 [2024-10-28 18:16:30.458527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:14.084 [2024-10-28 18:16:30.458546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.084 [2024-10-28 18:16:30.458557] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.084 [2024-10-28 18:16:30.458676] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.084 [2024-10-28 18:16:30.458695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:14.084 [2024-10-28 18:16:30.458713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.084 [2024-10-28 18:16:30.458723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.084 [2024-10-28 18:16:30.458789] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.084 [2024-10-28 18:16:30.458809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:14.084 [2024-10-28 18:16:30.458821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.084 [2024-10-28 18:16:30.458831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.084 [2024-10-28 18:16:30.458987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.084 [2024-10-28 18:16:30.459015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:14.084 [2024-10-28 18:16:30.459029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.084 [2024-10-28 18:16:30.459047] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.084 [2024-10-28 18:16:30.459097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.084 [2024-10-28 18:16:30.459115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:14.084 [2024-10-28 18:16:30.459127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.084 [2024-10-28 18:16:30.459137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.084 [2024-10-28 18:16:30.459181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.084 [2024-10-28 18:16:30.459195] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:14.084 [2024-10-28 18:16:30.459206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.084 [2024-10-28 18:16:30.459224] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.084 [2024-10-28 18:16:30.459302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:14.084 [2024-10-28 18:16:30.459344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:14.084 [2024-10-28 18:16:30.459359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:14.084 [2024-10-28 18:16:30.459369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:14.084 [2024-10-28 18:16:30.459518] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 585.430 ms, result 0 00:26:15.019 00:26:15.019 00:26:15.019 18:16:31 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:26:17.551 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:26:17.551 18:16:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:17.551 [2024-10-28 18:16:33.699974] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
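The spdk_dd invocation at dirty_shutdown.sh@95 reads a region of ftl0 back out to testfile2; --count and --skip are given in input blocks, so with the FTL bdev's 4 KiB block size (an assumption here, not stated in the log) the command copies the 1 GiB that starts 1 GiB into the device. A sketch of that flag arithmetic, plus a hash helper equivalent to the md5sum check at @94 — the constant and helper name are illustrative assumptions:

import hashlib

FTL_BLOCK_SIZE = 4096  # assumption: ftl0 exposes 4 KiB logical blocks
COUNT = 262144         # --count=262144: input blocks to copy
SKIP = 262144          # --skip=262144: input blocks to skip first

offset = SKIP * FTL_BLOCK_SIZE   # 262144 * 4096 = 1 GiB into ftl0
length = COUNT * FTL_BLOCK_SIZE  # another 1 GiB is copied out
print(f"reads bytes [{offset:#x}, {offset + length:#x}) of ftl0")  # [1 GiB, 2 GiB)

def md5_hex(path: str) -> str:
    """Digest a dumped testfile the way md5sum does, 1 MiB at a time."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

If the FTL restored its state correctly across the dirty shutdown, the digest of testfile2 should match the one recorded before the kill, presumably checked the same way as testfile was at @94.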
00:26:17.551 [2024-10-28 18:16:33.700333] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80084 ] 00:26:17.551 [2024-10-28 18:16:33.885145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.551 [2024-10-28 18:16:34.020876] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.149 [2024-10-28 18:16:34.398496] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:18.149 [2024-10-28 18:16:34.398578] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:18.149 [2024-10-28 18:16:34.563809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.563896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:18.149 [2024-10-28 18:16:34.563927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:18.149 [2024-10-28 18:16:34.563939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.564022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.564041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:18.149 [2024-10-28 18:16:34.564057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:18.149 [2024-10-28 18:16:34.564068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.564100] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:18.149 [2024-10-28 18:16:34.565167] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:18.149 [2024-10-28 18:16:34.565214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.565228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:18.149 [2024-10-28 18:16:34.565241] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.120 ms 00:26:18.149 [2024-10-28 18:16:34.565252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.566532] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:18.149 [2024-10-28 18:16:34.585096] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.585177] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:18.149 [2024-10-28 18:16:34.585199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.560 ms 00:26:18.149 [2024-10-28 18:16:34.585212] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.585350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.585372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:18.149 [2024-10-28 18:16:34.585385] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:26:18.149 [2024-10-28 18:16:34.585397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.591091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:18.149 [2024-10-28 18:16:34.591162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:18.149 [2024-10-28 18:16:34.591182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.550 ms 00:26:18.149 [2024-10-28 18:16:34.591194] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.591323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.591364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:18.149 [2024-10-28 18:16:34.591381] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:26:18.149 [2024-10-28 18:16:34.591393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.591478] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.591497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:18.149 [2024-10-28 18:16:34.591510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:18.149 [2024-10-28 18:16:34.591522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.591558] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:18.149 [2024-10-28 18:16:34.596204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.596274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:18.149 [2024-10-28 18:16:34.596304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.652 ms 00:26:18.149 [2024-10-28 18:16:34.596332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.596401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.596424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:18.149 [2024-10-28 18:16:34.596446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:26:18.149 [2024-10-28 18:16:34.596464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.596588] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:18.149 [2024-10-28 18:16:34.596638] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:18.149 [2024-10-28 18:16:34.596700] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:18.149 [2024-10-28 18:16:34.596744] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:18.149 [2024-10-28 18:16:34.596912] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:18.149 [2024-10-28 18:16:34.596947] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:18.149 [2024-10-28 18:16:34.596973] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:18.149 [2024-10-28 18:16:34.597000] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:18.149 [2024-10-28 18:16:34.597024] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:18.149 [2024-10-28 18:16:34.597045] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:18.149 [2024-10-28 18:16:34.597063] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:18.149 [2024-10-28 18:16:34.597078] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:18.149 [2024-10-28 18:16:34.597094] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:18.149 [2024-10-28 18:16:34.597122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.597140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:18.149 [2024-10-28 18:16:34.597160] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:26:18.149 [2024-10-28 18:16:34.597178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.597317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.149 [2024-10-28 18:16:34.597346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:18.149 [2024-10-28 18:16:34.597367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:18.149 [2024-10-28 18:16:34.597386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.149 [2024-10-28 18:16:34.597557] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:18.149 [2024-10-28 18:16:34.597597] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:18.149 [2024-10-28 18:16:34.597619] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:18.149 [2024-10-28 18:16:34.597641] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.149 [2024-10-28 18:16:34.597661] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:18.149 [2024-10-28 18:16:34.597680] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:18.149 [2024-10-28 18:16:34.597699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:18.149 [2024-10-28 18:16:34.597719] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:18.149 [2024-10-28 18:16:34.597738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:18.149 [2024-10-28 18:16:34.597757] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:18.149 [2024-10-28 18:16:34.597777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:18.149 [2024-10-28 18:16:34.597795] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:18.149 [2024-10-28 18:16:34.597812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:18.149 [2024-10-28 18:16:34.597830] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:18.149 [2024-10-28 18:16:34.597870] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:18.149 [2024-10-28 18:16:34.597908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.149 [2024-10-28 18:16:34.597927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:18.149 [2024-10-28 18:16:34.597945] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:18.149 [2024-10-28 18:16:34.597962] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.149 [2024-10-28 18:16:34.597980] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:18.149 [2024-10-28 18:16:34.597999] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:18.149 [2024-10-28 18:16:34.598016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.149 [2024-10-28 18:16:34.598041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:18.149 [2024-10-28 18:16:34.598058] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:18.149 [2024-10-28 18:16:34.598077] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.149 [2024-10-28 18:16:34.598095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:18.149 [2024-10-28 18:16:34.598112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:18.149 [2024-10-28 18:16:34.598130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.149 [2024-10-28 18:16:34.598147] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:18.150 [2024-10-28 18:16:34.598164] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:18.150 [2024-10-28 18:16:34.598180] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:18.150 [2024-10-28 18:16:34.598197] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:18.150 [2024-10-28 18:16:34.598215] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:18.150 [2024-10-28 18:16:34.598233] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:18.150 [2024-10-28 18:16:34.598249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:18.150 [2024-10-28 18:16:34.598267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:18.150 [2024-10-28 18:16:34.598286] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:18.150 [2024-10-28 18:16:34.598303] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:18.150 [2024-10-28 18:16:34.598322] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:18.150 [2024-10-28 18:16:34.598340] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.150 [2024-10-28 18:16:34.598358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:18.150 [2024-10-28 18:16:34.598376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:18.150 [2024-10-28 18:16:34.598395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.150 [2024-10-28 18:16:34.598413] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:18.150 [2024-10-28 18:16:34.598433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:18.150 [2024-10-28 18:16:34.598451] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:18.150 [2024-10-28 18:16:34.598471] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:18.150 [2024-10-28 18:16:34.598490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:18.150 [2024-10-28 18:16:34.598508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:18.150 [2024-10-28 18:16:34.598526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:18.150 
[2024-10-28 18:16:34.598544] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:18.150 [2024-10-28 18:16:34.598561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:18.150 [2024-10-28 18:16:34.598580] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:18.150 [2024-10-28 18:16:34.598600] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:18.150 [2024-10-28 18:16:34.598623] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:18.150 [2024-10-28 18:16:34.598642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:18.150 [2024-10-28 18:16:34.598661] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:18.150 [2024-10-28 18:16:34.598679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:18.150 [2024-10-28 18:16:34.598697] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:18.150 [2024-10-28 18:16:34.598715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:18.150 [2024-10-28 18:16:34.598733] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:18.150 [2024-10-28 18:16:34.598752] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:18.150 [2024-10-28 18:16:34.598772] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:18.150 [2024-10-28 18:16:34.598792] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:18.150 [2024-10-28 18:16:34.598812] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:18.150 [2024-10-28 18:16:34.598852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:18.150 [2024-10-28 18:16:34.598878] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:18.150 [2024-10-28 18:16:34.598899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:18.150 [2024-10-28 18:16:34.598918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:18.150 [2024-10-28 18:16:34.598938] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:18.150 [2024-10-28 18:16:34.598974] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:18.150 [2024-10-28 18:16:34.598997] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:26:18.150 [2024-10-28 18:16:34.599019] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:18.150 [2024-10-28 18:16:34.599041] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:18.150 [2024-10-28 18:16:34.599063] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:18.150 [2024-10-28 18:16:34.599087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.150 [2024-10-28 18:16:34.599106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:18.150 [2024-10-28 18:16:34.599122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.618 ms 00:26:18.150 [2024-10-28 18:16:34.599137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.632744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.632816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:18.408 [2024-10-28 18:16:34.632865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.520 ms 00:26:18.408 [2024-10-28 18:16:34.632883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.633007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.633033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:18.408 [2024-10-28 18:16:34.633056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:26:18.408 [2024-10-28 18:16:34.633072] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.682368] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.682439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:18.408 [2024-10-28 18:16:34.682461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.186 ms 00:26:18.408 [2024-10-28 18:16:34.682472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.682550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.682567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:18.408 [2024-10-28 18:16:34.682580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:26:18.408 [2024-10-28 18:16:34.682598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.683037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.683058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:18.408 [2024-10-28 18:16:34.683071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:26:18.408 [2024-10-28 18:16:34.683083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.683241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.683263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:18.408 [2024-10-28 18:16:34.683275] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:26:18.408 [2024-10-28 18:16:34.683293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.700426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.700713] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:18.408 [2024-10-28 18:16:34.700754] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.101 ms 00:26:18.408 [2024-10-28 18:16:34.700767] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.718307] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:18.408 [2024-10-28 18:16:34.718389] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:18.408 [2024-10-28 18:16:34.718412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.718424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:18.408 [2024-10-28 18:16:34.718439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.411 ms 00:26:18.408 [2024-10-28 18:16:34.718450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.749545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.749630] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:18.408 [2024-10-28 18:16:34.749651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 30.993 ms 00:26:18.408 [2024-10-28 18:16:34.749663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.766441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.766537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:18.408 [2024-10-28 18:16:34.766572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.708 ms 00:26:18.408 [2024-10-28 18:16:34.766592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.783105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.783192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:18.408 [2024-10-28 18:16:34.783213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.390 ms 00:26:18.408 [2024-10-28 18:16:34.783225] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.784192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.784233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:18.408 [2024-10-28 18:16:34.784261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.722 ms 00:26:18.408 [2024-10-28 18:16:34.784293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.861003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.861300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:18.408 [2024-10-28 18:16:34.861344] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 76.667 ms 00:26:18.408 [2024-10-28 18:16:34.861358] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.874345] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:18.408 [2024-10-28 18:16:34.877126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.877167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:18.408 [2024-10-28 18:16:34.877186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.689 ms 00:26:18.408 [2024-10-28 18:16:34.877198] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.877332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.877353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:18.408 [2024-10-28 18:16:34.877368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:18.408 [2024-10-28 18:16:34.877384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.878039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.878071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:18.408 [2024-10-28 18:16:34.878086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.598 ms 00:26:18.408 [2024-10-28 18:16:34.878098] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.878134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.878149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:18.408 [2024-10-28 18:16:34.878161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:26:18.408 [2024-10-28 18:16:34.878172] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.408 [2024-10-28 18:16:34.878224] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:18.408 [2024-10-28 18:16:34.878244] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.408 [2024-10-28 18:16:34.878256] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:18.408 [2024-10-28 18:16:34.878267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:26:18.408 [2024-10-28 18:16:34.878277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.666 [2024-10-28 18:16:34.910132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.666 [2024-10-28 18:16:34.910208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:18.666 [2024-10-28 18:16:34.910229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.826 ms 00:26:18.666 [2024-10-28 18:16:34.910253] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:18.666 [2024-10-28 18:16:34.910375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:18.666 [2024-10-28 18:16:34.910394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:18.666 [2024-10-28 18:16:34.910408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:18.666 [2024-10-28 18:16:34.910419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
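Each record above comes from the FTL management pipeline: for every startup step, trace_step logs an Action with its name, duration, and status, and the finish_msg record just below rolls these up into the end-to-end 'FTL startup' total of 347.329 ms. As a minimal cross-check sketch, the per-step durations can be summed straight out of the log, assuming the startup portion has been saved to ftl_startup.log (a hypothetical filename); the sum is only a lower bound on the total, since time spent between steps is not attributed to any step.

    # Sum the per-step durations emitted by trace_step and compare against
    # the end-to-end total printed by finish_msg.
    grep -o 'duration: [0-9.]* ms' ftl_startup.log \
      | awk '{ sum += $2 } END { printf "sum of step durations: %.3f ms\n", sum }'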
00:26:18.666 [2024-10-28 18:16:34.911694] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 347.329 ms, result 0 00:26:20.042  [2024-10-28T18:16:37.457Z] Copying: 22/1024 [MB] (22 MBps) [intermediate progress updates elided] [2024-10-28T18:17:24.556Z] Copying: 1045404/1048576 [kB] (7608 kBps) [2024-10-28T18:17:24.556Z] Copying: 1024/1024 [MB] (average 20 MBps)[2024-10-28 18:17:24.481891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.078 [2024-10-28 18:17:24.481996] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:27:08.078 [2024-10-28 18:17:24.482028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.005 ms 00:27:08.078 [2024-10-28 18:17:24.482046] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.078 [2024-10-28 18:17:24.482091] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:08.078 [2024-10-28 18:17:24.487760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.078 [2024-10-28 18:17:24.487857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:08.078 [2024-10-28 18:17:24.487903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.623 ms 00:27:08.078 [2024-10-28 18:17:24.487920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.078 [2024-10-28 18:17:24.488405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.078 [2024-10-28 18:17:24.488610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:08.078 [2024-10-28 18:17:24.488644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.403 ms 00:27:08.078 [2024-10-28 18:17:24.488662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.078 [2024-10-28 18:17:24.496370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.078 [2024-10-28 18:17:24.496697] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:08.078 [2024-10-28 18:17:24.497030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.662 ms 00:27:08.078 [2024-10-28 18:17:24.497109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.078 [2024-10-28 18:17:24.510970] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.078 [2024-10-28 18:17:24.511304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:08.078 [2024-10-28 18:17:24.515240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.439 ms 00:27:08.078 [2024-10-28 18:17:24.515290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.338 [2024-10-28 18:17:24.565615] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.338 [2024-10-28 18:17:24.566117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:08.338 [2024-10-28 18:17:24.566211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.132 ms 00:27:08.338 [2024-10-28 18:17:24.566271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.338 [2024-10-28 18:17:24.594127] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.338 [2024-10-28 18:17:24.594586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:08.338 [2024-10-28 18:17:24.594853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.692 ms 00:27:08.338 [2024-10-28 18:17:24.594952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.338 [2024-10-28 18:17:24.607935] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.338 [2024-10-28 18:17:24.608276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:08.338 [2024-10-28 18:17:24.608527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.807 ms 00:27:08.338 [2024-10-28 18:17:24.608614] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.338 [2024-10-28 18:17:24.658122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:27:08.338 [2024-10-28 18:17:24.658412] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:08.338 [2024-10-28 18:17:24.658459] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.405 ms 00:27:08.338 [2024-10-28 18:17:24.658484] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.338 [2024-10-28 18:17:24.707679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.338 [2024-10-28 18:17:24.708189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:08.338 [2024-10-28 18:17:24.708239] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 49.087 ms 00:27:08.338 [2024-10-28 18:17:24.708262] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.338 [2024-10-28 18:17:24.756758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.338 [2024-10-28 18:17:24.756912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:08.338 [2024-10-28 18:17:24.756950] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.388 ms 00:27:08.338 [2024-10-28 18:17:24.756971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.338 [2024-10-28 18:17:24.805956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.338 [2024-10-28 18:17:24.806087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:08.338 [2024-10-28 18:17:24.806126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.767 ms 00:27:08.338 [2024-10-28 18:17:24.806146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.338 [2024-10-28 18:17:24.806262] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:08.338 [2024-10-28 18:17:24.806299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:27:08.338 [2024-10-28 18:17:24.806340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:27:08.338 [2024-10-28 18:17:24.806361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 
[2024-10-28 18:17:24.806558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806757] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:08.338 [2024-10-28 18:17:24.806980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807020] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:27:08.339 [2024-10-28 18:17:24.807099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807120] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807202] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807282] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807880] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807920] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807942] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.807982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808164] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808204] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808341] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808462] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:08.339 [2024-10-28 18:17:24.808536] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:08.339 [2024-10-28 18:17:24.808570] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 97f41789-0530-4005-80e7-a5ffa4625272 00:27:08.339 [2024-10-28 18:17:24.808590] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:27:08.339 [2024-10-28 18:17:24.808608] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:27:08.339 [2024-10-28 18:17:24.808626] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:27:08.340 [2024-10-28 18:17:24.808646] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:27:08.340 [2024-10-28 18:17:24.808664] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:08.340 [2024-10-28 18:17:24.808682] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:08.340 [2024-10-28 18:17:24.808722] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:08.340 [2024-10-28 18:17:24.808740] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:08.340 [2024-10-28 18:17:24.808756] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:08.340 [2024-10-28 18:17:24.808777] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.340 [2024-10-28 18:17:24.808797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:27:08.340 [2024-10-28 18:17:24.808821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.518 ms 00:27:08.340 [2024-10-28 
18:17:24.808861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.598 [2024-10-28 18:17:24.834330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.598 [2024-10-28 18:17:24.834614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:08.598 [2024-10-28 18:17:24.834658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.341 ms 00:27:08.598 [2024-10-28 18:17:24.834677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.598 [2024-10-28 18:17:24.835396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:08.598 [2024-10-28 18:17:24.835438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:08.598 [2024-10-28 18:17:24.835473] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:27:08.598 [2024-10-28 18:17:24.835490] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.598 [2024-10-28 18:17:24.908295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.598 [2024-10-28 18:17:24.908386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:08.598 [2024-10-28 18:17:24.908414] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.598 [2024-10-28 18:17:24.908431] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.598 [2024-10-28 18:17:24.908590] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.598 [2024-10-28 18:17:24.908610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:08.598 [2024-10-28 18:17:24.908641] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.598 [2024-10-28 18:17:24.908658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.598 [2024-10-28 18:17:24.908870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.598 [2024-10-28 18:17:24.908898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:08.598 [2024-10-28 18:17:24.908917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.598 [2024-10-28 18:17:24.908935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.598 [2024-10-28 18:17:24.908979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.598 [2024-10-28 18:17:24.908999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:08.598 [2024-10-28 18:17:24.909016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.598 [2024-10-28 18:17:24.909040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.598 [2024-10-28 18:17:25.050668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.598 [2024-10-28 18:17:25.050772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:08.598 [2024-10-28 18:17:25.050801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.598 [2024-10-28 18:17:25.050819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.855 [2024-10-28 18:17:25.167083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.856 [2024-10-28 18:17:25.167183] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:08.856 [2024-10-28 18:17:25.167212] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.856 [2024-10-28 18:17:25.167247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-10-28 18:17:25.167439] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.856 [2024-10-28 18:17:25.167465] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:08.856 [2024-10-28 18:17:25.167482] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.856 [2024-10-28 18:17:25.167498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-10-28 18:17:25.167653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.856 [2024-10-28 18:17:25.167675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:08.856 [2024-10-28 18:17:25.167693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.856 [2024-10-28 18:17:25.167708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-10-28 18:17:25.167988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.856 [2024-10-28 18:17:25.168017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:08.856 [2024-10-28 18:17:25.168035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.856 [2024-10-28 18:17:25.168052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-10-28 18:17:25.168152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.856 [2024-10-28 18:17:25.168176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:08.856 [2024-10-28 18:17:25.168194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.856 [2024-10-28 18:17:25.168228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-10-28 18:17:25.168311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.856 [2024-10-28 18:17:25.168334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:08.856 [2024-10-28 18:17:25.168351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.856 [2024-10-28 18:17:25.168368] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-10-28 18:17:25.168475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:08.856 [2024-10-28 18:17:25.168498] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:08.856 [2024-10-28 18:17:25.168515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:08.856 [2024-10-28 18:17:25.168531] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:08.856 [2024-10-28 18:17:25.168889] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 686.899 ms, result 0 00:27:10.229 00:27:10.229 00:27:10.229 18:17:26 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:12.759 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:27:12.759 18:17:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:27:12.759 18:17:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:27:12.759 18:17:28 ftl.ftl_dirty_shutdown -- 
ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:12.759 18:17:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:27:12.760 18:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:13.017 18:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:27:13.017 18:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:27:13.017 Process with pid 78217 is not found 00:27:13.017 18:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 78217 00:27:13.017 18:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@952 -- # '[' -z 78217 ']' 00:27:13.017 18:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@956 -- # kill -0 78217 00:27:13.017 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (78217) - No such process 00:27:13.017 18:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@979 -- # echo 'Process with pid 78217 is not found' 00:27:13.017 18:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:27:13.275 Remove shared memory files 00:27:13.275 18:17:29 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:27:13.275 18:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:27:13.275 18:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:27:13.275 18:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:27:13.275 18:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:27:13.275 18:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:27:13.275 18:17:29 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:27:13.275 ************************************ 00:27:13.275 END TEST ftl_dirty_shutdown 00:27:13.275 ************************************ 00:27:13.275 00:27:13.275 real 3m58.237s 00:27:13.275 user 4m29.356s 00:27:13.275 sys 0m39.062s 00:27:13.275 18:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:13.275 18:17:29 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:13.534 18:17:29 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:13.535 18:17:29 ftl -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:27:13.535 18:17:29 ftl -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:13.535 18:17:29 ftl -- common/autotest_common.sh@10 -- # set +x 00:27:13.535 ************************************ 00:27:13.535 START TEST ftl_upgrade_shutdown 00:27:13.535 ************************************ 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1127 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:27:13.535 * Looking for test storage... 
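The teardown traced above ends in autotest_common.sh's killprocess: it first guards against an empty pid with '[' -z ... ']', then probes the target with kill -0, which delivers no signal and only reports whether the pid is still alive; pid 78217 is already gone here, so only the not-found message is echoed. A simplified sketch of that guard pattern (not the actual helper, which carries extra bookkeeping):

    # kill -0 checks for existence/permission without signalling the process.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        if kill -0 "$pid" 2>/dev/null; then
            kill "$pid"
        else
            echo "Process with pid $pid is not found"
        fi
    }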
00:27:13.535 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:13.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.535 --rc genhtml_branch_coverage=1 00:27:13.535 --rc genhtml_function_coverage=1 00:27:13.535 --rc genhtml_legend=1 00:27:13.535 --rc geninfo_all_blocks=1 00:27:13.535 --rc geninfo_unexecuted_blocks=1 00:27:13.535 00:27:13.535 ' 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:13.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.535 --rc genhtml_branch_coverage=1 00:27:13.535 --rc genhtml_function_coverage=1 00:27:13.535 --rc genhtml_legend=1 00:27:13.535 --rc geninfo_all_blocks=1 00:27:13.535 --rc geninfo_unexecuted_blocks=1 00:27:13.535 00:27:13.535 ' 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:13.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.535 --rc genhtml_branch_coverage=1 00:27:13.535 --rc genhtml_function_coverage=1 00:27:13.535 --rc genhtml_legend=1 00:27:13.535 --rc geninfo_all_blocks=1 00:27:13.535 --rc geninfo_unexecuted_blocks=1 00:27:13.535 00:27:13.535 ' 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:13.535 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.535 --rc genhtml_branch_coverage=1 00:27:13.535 --rc genhtml_function_coverage=1 00:27:13.535 --rc genhtml_legend=1 00:27:13.535 --rc geninfo_all_blocks=1 00:27:13.535 --rc geninfo_unexecuted_blocks=1 00:27:13.535 00:27:13.535 ' 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:27:13.535 18:17:29 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:27:13.535 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:27:13.797 18:17:30 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:27:13.797 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=80691 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 80691 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80691 ']' 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.798 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:13.798 18:17:30 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:13.798 [2024-10-28 18:17:30.221193] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
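waitforlisten now polls /var/tmp/spdk.sock until the freshly started spdk_tgt answers RPCs; once it does, the trace below assembles the FTL base device over JSON-RPC: attach the QEMU NVMe controller at 0000:00:11.0 as 'base', delete the leftover lvstore, create a new lvstore on basen1, and carve a 20480 MiB thin volume out of it. Condensed into a plain rpc.py session (both UUIDs are per-run values taken from this trace):

    # Condensed from the trace below; assumes spdk_tgt is listening on the
    # default /var/tmp/spdk.sock socket.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0
    $rpc bdev_lvol_get_lvstores | jq -r '.[] | .uuid'   # find leftover lvstores
    $rpc bdev_lvol_delete_lvstore -u 061186b3-fefb-4102-90a9-08187fe40ffb
    $rpc bdev_lvol_create_lvstore basen1 lvs
    $rpc bdev_lvol_create basen1p0 20480 -t -u 1a8e1e43-4a63-4add-8e93-a4b4f0c2945f   # 20480 MiB, thin-provisioned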
00:27:13.798 [2024-10-28 18:17:30.221922] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80691 ] 00:27:14.055 [2024-10-28 18:17:30.427966] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.314 [2024-10-28 18:17:30.614660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:27:15.682 18:17:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:27:15.939 18:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:27:15.939 18:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:27:15.939 18:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:27:15.939 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=basen1 00:27:15.939 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:15.939 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:15.939 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 
-- # local nb 00:27:15.939 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:16.504 { 00:27:16.504 "name": "basen1", 00:27:16.504 "aliases": [ 00:27:16.504 "b0a04482-17c9-4329-b6d4-a9136ee08400" 00:27:16.504 ], 00:27:16.504 "product_name": "NVMe disk", 00:27:16.504 "block_size": 4096, 00:27:16.504 "num_blocks": 1310720, 00:27:16.504 "uuid": "b0a04482-17c9-4329-b6d4-a9136ee08400", 00:27:16.504 "numa_id": -1, 00:27:16.504 "assigned_rate_limits": { 00:27:16.504 "rw_ios_per_sec": 0, 00:27:16.504 "rw_mbytes_per_sec": 0, 00:27:16.504 "r_mbytes_per_sec": 0, 00:27:16.504 "w_mbytes_per_sec": 0 00:27:16.504 }, 00:27:16.504 "claimed": true, 00:27:16.504 "claim_type": "read_many_write_one", 00:27:16.504 "zoned": false, 00:27:16.504 "supported_io_types": { 00:27:16.504 "read": true, 00:27:16.504 "write": true, 00:27:16.504 "unmap": true, 00:27:16.504 "flush": true, 00:27:16.504 "reset": true, 00:27:16.504 "nvme_admin": true, 00:27:16.504 "nvme_io": true, 00:27:16.504 "nvme_io_md": false, 00:27:16.504 "write_zeroes": true, 00:27:16.504 "zcopy": false, 00:27:16.504 "get_zone_info": false, 00:27:16.504 "zone_management": false, 00:27:16.504 "zone_append": false, 00:27:16.504 "compare": true, 00:27:16.504 "compare_and_write": false, 00:27:16.504 "abort": true, 00:27:16.504 "seek_hole": false, 00:27:16.504 "seek_data": false, 00:27:16.504 "copy": true, 00:27:16.504 "nvme_iov_md": false 00:27:16.504 }, 00:27:16.504 "driver_specific": { 00:27:16.504 "nvme": [ 00:27:16.504 { 00:27:16.504 "pci_address": "0000:00:11.0", 00:27:16.504 "trid": { 00:27:16.504 "trtype": "PCIe", 00:27:16.504 "traddr": "0000:00:11.0" 00:27:16.504 }, 00:27:16.504 "ctrlr_data": { 00:27:16.504 "cntlid": 0, 00:27:16.504 "vendor_id": "0x1b36", 00:27:16.504 "model_number": "QEMU NVMe Ctrl", 00:27:16.504 "serial_number": "12341", 00:27:16.504 "firmware_revision": "8.0.0", 00:27:16.504 "subnqn": "nqn.2019-08.org.qemu:12341", 00:27:16.504 "oacs": { 00:27:16.504 "security": 0, 00:27:16.504 "format": 1, 00:27:16.504 "firmware": 0, 00:27:16.504 "ns_manage": 1 00:27:16.504 }, 00:27:16.504 "multi_ctrlr": false, 00:27:16.504 "ana_reporting": false 00:27:16.504 }, 00:27:16.504 "vs": { 00:27:16.504 "nvme_version": "1.4" 00:27:16.504 }, 00:27:16.504 "ns_data": { 00:27:16.504 "id": 1, 00:27:16.504 "can_share": false 00:27:16.504 } 00:27:16.504 } 00:27:16.504 ], 00:27:16.504 "mp_policy": "active_passive" 00:27:16.504 } 00:27:16.504 } 00:27:16.504 ]' 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=1310720 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=5120 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 5120 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:27:16.504 18:17:32 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:17.070 18:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=061186b3-fefb-4102-90a9-08187fe40ffb 00:27:17.070 18:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:27:17.070 18:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 061186b3-fefb-4102-90a9-08187fe40ffb 00:27:17.647 18:17:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:27:17.934 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=1a8e1e43-4a63-4add-8e93-a4b4f0c2945f 00:27:17.934 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 1a8e1e43-4a63-4add-8e93-a4b4f0c2945f 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=0217bf0d-ad3a-4335-9e16-88d1a4800c0a 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 0217bf0d-ad3a-4335-9e16-88d1a4800c0a ]] 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 0217bf0d-ad3a-4335-9e16-88d1a4800c0a 5120 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=0217bf0d-ad3a-4335-9e16-88d1a4800c0a 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 0217bf0d-ad3a-4335-9e16-88d1a4800c0a 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1380 -- # local bdev_name=0217bf0d-ad3a-4335-9e16-88d1a4800c0a 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1381 -- # local bdev_info 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bs 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local nb 00:27:18.500 18:17:34 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0217bf0d-ad3a-4335-9e16-88d1a4800c0a 00:27:18.759 18:17:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:27:18.759 { 00:27:18.759 "name": "0217bf0d-ad3a-4335-9e16-88d1a4800c0a", 00:27:18.759 "aliases": [ 00:27:18.759 "lvs/basen1p0" 00:27:18.759 ], 00:27:18.759 "product_name": "Logical Volume", 00:27:18.759 "block_size": 4096, 00:27:18.759 "num_blocks": 5242880, 00:27:18.759 "uuid": "0217bf0d-ad3a-4335-9e16-88d1a4800c0a", 00:27:18.759 "assigned_rate_limits": { 00:27:18.759 "rw_ios_per_sec": 0, 00:27:18.759 "rw_mbytes_per_sec": 0, 00:27:18.759 "r_mbytes_per_sec": 0, 00:27:18.759 "w_mbytes_per_sec": 0 00:27:18.759 }, 00:27:18.759 "claimed": false, 00:27:18.759 "zoned": false, 00:27:18.759 "supported_io_types": { 00:27:18.759 "read": true, 00:27:18.759 "write": true, 00:27:18.759 "unmap": true, 00:27:18.759 "flush": false, 00:27:18.759 "reset": true, 00:27:18.759 "nvme_admin": false, 00:27:18.759 "nvme_io": false, 00:27:18.759 "nvme_io_md": false, 00:27:18.759 "write_zeroes": 
true, 00:27:18.759 "zcopy": false, 00:27:18.759 "get_zone_info": false, 00:27:18.759 "zone_management": false, 00:27:18.759 "zone_append": false, 00:27:18.759 "compare": false, 00:27:18.759 "compare_and_write": false, 00:27:18.759 "abort": false, 00:27:18.759 "seek_hole": true, 00:27:18.759 "seek_data": true, 00:27:18.759 "copy": false, 00:27:18.759 "nvme_iov_md": false 00:27:18.759 }, 00:27:18.759 "driver_specific": { 00:27:18.759 "lvol": { 00:27:18.759 "lvol_store_uuid": "1a8e1e43-4a63-4add-8e93-a4b4f0c2945f", 00:27:18.759 "base_bdev": "basen1", 00:27:18.759 "thin_provision": true, 00:27:18.759 "num_allocated_clusters": 0, 00:27:18.759 "snapshot": false, 00:27:18.759 "clone": false, 00:27:18.759 "esnap_clone": false 00:27:18.759 } 00:27:18.759 } 00:27:18.759 } 00:27:18.759 ]' 00:27:18.759 18:17:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:27:19.016 18:17:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # bs=4096 00:27:19.016 18:17:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:27:19.016 18:17:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # nb=5242880 00:27:19.016 18:17:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1389 -- # bdev_size=20480 00:27:19.016 18:17:35 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1390 -- # echo 20480 00:27:19.016 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:27:19.016 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:27:19.016 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:27:19.274 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:27:19.274 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:27:19.274 18:17:35 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:27:19.840 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:27:19.840 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:27:19.840 18:17:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 0217bf0d-ad3a-4335-9e16-88d1a4800c0a -c cachen1p0 --l2p_dram_limit 2 00:27:20.099 [2024-10-28 18:17:36.343728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.099 [2024-10-28 18:17:36.343819] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:27:20.099 [2024-10-28 18:17:36.343890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.013 ms 00:27:20.099 [2024-10-28 18:17:36.343919] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.099 [2024-10-28 18:17:36.344049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.099 [2024-10-28 18:17:36.344089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:27:20.099 [2024-10-28 18:17:36.344113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:27:20.099 [2024-10-28 18:17:36.344137] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.099 [2024-10-28 18:17:36.344196] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:27:20.099 [2024-10-28 
18:17:36.345604] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:27:20.099 [2024-10-28 18:17:36.345678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.099 [2024-10-28 18:17:36.345710] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:27:20.099 [2024-10-28 18:17:36.345740] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.490 ms 00:27:20.099 [2024-10-28 18:17:36.345757] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.099 [2024-10-28 18:17:36.345938] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID 32bdcf0b-f3ee-4351-9671-39840621531e 00:27:20.099 [2024-10-28 18:17:36.347360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.099 [2024-10-28 18:17:36.347429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:27:20.099 [2024-10-28 18:17:36.347455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.031 ms 00:27:20.099 [2024-10-28 18:17:36.347481] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.099 [2024-10-28 18:17:36.353710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.099 [2024-10-28 18:17:36.353827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:27:20.099 [2024-10-28 18:17:36.353897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.061 ms 00:27:20.099 [2024-10-28 18:17:36.353930] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.099 [2024-10-28 18:17:36.354070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.099 [2024-10-28 18:17:36.354101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:27:20.099 [2024-10-28 18:17:36.354122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.054 ms 00:27:20.099 [2024-10-28 18:17:36.354147] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.099 [2024-10-28 18:17:36.354286] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.099 [2024-10-28 18:17:36.354326] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:27:20.099 [2024-10-28 18:17:36.354346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:27:20.099 [2024-10-28 18:17:36.354377] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.099 [2024-10-28 18:17:36.354445] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:27:20.099 [2024-10-28 18:17:36.362203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.100 [2024-10-28 18:17:36.362291] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:27:20.100 [2024-10-28 18:17:36.362326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.766 ms 00:27:20.100 [2024-10-28 18:17:36.362346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.100 [2024-10-28 18:17:36.362429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.100 [2024-10-28 18:17:36.362461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:27:20.100 [2024-10-28 18:17:36.362490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms 00:27:20.100 [2024-10-28 18:17:36.362513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:27:20.100 [2024-10-28 18:17:36.362630] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:27:20.100 [2024-10-28 18:17:36.362889] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:27:20.100 [2024-10-28 18:17:36.362944] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:27:20.100 [2024-10-28 18:17:36.362977] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:27:20.100 [2024-10-28 18:17:36.363013] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:27:20.100 [2024-10-28 18:17:36.363041] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:27:20.100 [2024-10-28 18:17:36.363068] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:27:20.100 [2024-10-28 18:17:36.363090] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:27:20.100 [2024-10-28 18:17:36.363123] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:27:20.100 [2024-10-28 18:17:36.363144] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:27:20.100 [2024-10-28 18:17:36.363173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.100 [2024-10-28 18:17:36.363196] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:27:20.100 [2024-10-28 18:17:36.363219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.547 ms 00:27:20.100 [2024-10-28 18:17:36.363243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.100 [2024-10-28 18:17:36.363390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.100 [2024-10-28 18:17:36.363422] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:27:20.100 [2024-10-28 18:17:36.363456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:27:20.100 [2024-10-28 18:17:36.363487] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.100 [2024-10-28 18:17:36.363680] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:27:20.100 [2024-10-28 18:17:36.363716] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:27:20.100 [2024-10-28 18:17:36.363744] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:20.100 [2024-10-28 18:17:36.363769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.363797] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:27:20.100 [2024-10-28 18:17:36.363820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.363872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:27:20.100 [2024-10-28 18:17:36.363911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:27:20.100 [2024-10-28 18:17:36.363940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:27:20.100 [2024-10-28 18:17:36.363963] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.363990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:27:20.100 [2024-10-28 18:17:36.364008] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:27:20.100 [2024-10-28 18:17:36.364022] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.364040] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:27:20.100 [2024-10-28 18:17:36.364066] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:27:20.100 [2024-10-28 18:17:36.364087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.364116] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:27:20.100 [2024-10-28 18:17:36.364139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:27:20.100 [2024-10-28 18:17:36.364161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.364175] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:27:20.100 [2024-10-28 18:17:36.364200] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:27:20.100 [2024-10-28 18:17:36.364223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:20.100 [2024-10-28 18:17:36.364251] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:27:20.100 [2024-10-28 18:17:36.364276] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:27:20.100 [2024-10-28 18:17:36.364300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:20.100 [2024-10-28 18:17:36.364322] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:27:20.100 [2024-10-28 18:17:36.364347] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:27:20.100 [2024-10-28 18:17:36.364368] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:20.100 [2024-10-28 18:17:36.364393] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:27:20.100 [2024-10-28 18:17:36.364418] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:27:20.100 [2024-10-28 18:17:36.364443] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:27:20.100 [2024-10-28 18:17:36.364466] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:27:20.100 [2024-10-28 18:17:36.364489] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:27:20.100 [2024-10-28 18:17:36.364511] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.364538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:27:20.100 [2024-10-28 18:17:36.364561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:27:20.100 [2024-10-28 18:17:36.364588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.364610] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:27:20.100 [2024-10-28 18:17:36.364627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:27:20.100 [2024-10-28 18:17:36.364648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.364671] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:27:20.100 [2024-10-28 18:17:36.364694] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:27:20.100 [2024-10-28 18:17:36.364720] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.364742] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:27:20.100 [2024-10-28 18:17:36.364770] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:27:20.100 [2024-10-28 18:17:36.364793] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:27:20.100 [2024-10-28 18:17:36.364814] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:27:20.100 [2024-10-28 18:17:36.364857] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:27:20.100 [2024-10-28 18:17:36.364898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:27:20.100 [2024-10-28 18:17:36.364924] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:27:20.100 [2024-10-28 18:17:36.364950] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:27:20.100 [2024-10-28 18:17:36.364973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:27:20.100 [2024-10-28 18:17:36.364994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:27:20.100 [2024-10-28 18:17:36.365035] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:27:20.100 [2024-10-28 18:17:36.365067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:27:20.100 [2024-10-28 18:17:36.365126] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365143] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365170] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:27:20.100 [2024-10-28 18:17:36.365193] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:27:20.100 [2024-10-28 18:17:36.365216] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:27:20.100 [2024-10-28 18:17:36.365242] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:27:20.100 [2024-10-28 18:17:36.365268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365291] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365320] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365372] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:27:20.100 [2024-10-28 18:17:36.365443] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:27:20.100 [2024-10-28 18:17:36.365488] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365515] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:20.100 [2024-10-28 18:17:36.365546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:27:20.100 [2024-10-28 18:17:36.365571] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:27:20.100 [2024-10-28 18:17:36.365599] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:27:20.100 [2024-10-28 18:17:36.365625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:20.100 [2024-10-28 18:17:36.365655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:27:20.100 [2024-10-28 18:17:36.365681] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.059 ms 00:27:20.100 [2024-10-28 18:17:36.365707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:20.101 [2024-10-28 18:17:36.365821] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
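
Up to this point the trace has assembled the full bdev stack behind the FTL instance: a thin-provisioned 20480 MB lvol carved from the PCIe controller at 0000:00:11.0 is the base device, and the first 5120 MB split of the controller at 0000:00:10.0 (cachen1p0) is the non-volatile write-buffer cache; bdev_ftl_create ties the two together, after which startup scrubs the cache region (the scrub of the 5 chunks continues below). A minimal sketch of the same sequence, using only the rpc.py calls visible in the trace (the lvstore and lvol UUIDs are run-specific):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Base device: PCIe NVMe controller -> lvstore -> thin-provisioned lvol
    $rpc bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # exposes basen1
    lvs=$($rpc bdev_lvol_create_lvstore basen1 lvs)
    lvol=$($rpc bdev_lvol_create basen1p0 20480 -t -u "$lvs")
    # NV cache: second controller, first 5120 MB split becomes cachen1p0
    $rpc bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # exposes cachen1
    $rpc bdev_split_create cachen1 -s 5120 1
    # Combine base + cache into the FTL bdev (60 s RPC timeout covers the scrub)
    $rpc -t 60 bdev_ftl_create -b ftl -d "$lvol" -c cachen1p0 --l2p_dram_limit 2
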
00:27:20.101 [2024-10-28 18:17:36.365888] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:27:23.382 [2024-10-28 18:17:39.232043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.232155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:27:23.382 [2024-10-28 18:17:39.232194] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2866.237 ms 00:27:23.382 [2024-10-28 18:17:39.232222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.277706] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.277777] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:27:23.382 [2024-10-28 18:17:39.277800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 45.076 ms 00:27:23.382 [2024-10-28 18:17:39.277816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.277967] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.277994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:27:23.382 [2024-10-28 18:17:39.278009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:27:23.382 [2024-10-28 18:17:39.278026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.324982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.325070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:27:23.382 [2024-10-28 18:17:39.325092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 46.871 ms 00:27:23.382 [2024-10-28 18:17:39.325107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.325172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.325199] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:27:23.382 [2024-10-28 18:17:39.325213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:23.382 [2024-10-28 18:17:39.325228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.325628] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.325653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:27:23.382 [2024-10-28 18:17:39.325668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:27:23.382 [2024-10-28 18:17:39.325683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.325749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.325768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:27:23.382 [2024-10-28 18:17:39.325783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:27:23.382 [2024-10-28 18:17:39.325800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.343377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.343446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:27:23.382 [2024-10-28 18:17:39.343475] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.548 ms 00:27:23.382 [2024-10-28 18:17:39.343491] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.357350] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:27:23.382 [2024-10-28 18:17:39.358330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.358365] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:27:23.382 [2024-10-28 18:17:39.358388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.684 ms 00:27:23.382 [2024-10-28 18:17:39.358402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.397321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.397551] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:27:23.382 [2024-10-28 18:17:39.397591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 38.849 ms 00:27:23.382 [2024-10-28 18:17:39.397607] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.397729] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.397751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:27:23.382 [2024-10-28 18:17:39.397771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:27:23.382 [2024-10-28 18:17:39.397784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.429730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.429827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:27:23.382 [2024-10-28 18:17:39.429888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.810 ms 00:27:23.382 [2024-10-28 18:17:39.429904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.461918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.461986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:27:23.382 [2024-10-28 18:17:39.462011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.942 ms 00:27:23.382 [2024-10-28 18:17:39.462024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.462758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.462783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:27:23.382 [2024-10-28 18:17:39.462801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.681 ms 00:27:23.382 [2024-10-28 18:17:39.462814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.382 [2024-10-28 18:17:39.563500] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.382 [2024-10-28 18:17:39.563571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:27:23.382 [2024-10-28 18:17:39.563601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 100.575 ms 00:27:23.383 [2024-10-28 18:17:39.563631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.383 [2024-10-28 18:17:39.596985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:27:23.383 [2024-10-28 18:17:39.597066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:27:23.383 [2024-10-28 18:17:39.597108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.180 ms 00:27:23.383 [2024-10-28 18:17:39.597122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.383 [2024-10-28 18:17:39.629754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.383 [2024-10-28 18:17:39.629827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:27:23.383 [2024-10-28 18:17:39.629869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.536 ms 00:27:23.383 [2024-10-28 18:17:39.629883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.383 [2024-10-28 18:17:39.661974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.383 [2024-10-28 18:17:39.662055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:27:23.383 [2024-10-28 18:17:39.662081] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.000 ms 00:27:23.383 [2024-10-28 18:17:39.662094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.383 [2024-10-28 18:17:39.662189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.383 [2024-10-28 18:17:39.662208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:27:23.383 [2024-10-28 18:17:39.662228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:27:23.383 [2024-10-28 18:17:39.662241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.383 [2024-10-28 18:17:39.662392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:23.383 [2024-10-28 18:17:39.662413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:27:23.383 [2024-10-28 18:17:39.662432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:27:23.383 [2024-10-28 18:17:39.662445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:23.383 [2024-10-28 18:17:39.663562] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 3319.379 ms, result 0 00:27:23.383 { 00:27:23.383 "name": "ftl", 00:27:23.383 "uuid": "32bdcf0b-f3ee-4351-9671-39840621531e" 00:27:23.383 } 00:27:23.383 18:17:39 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:27:23.651 [2024-10-28 18:17:40.030936] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.651 18:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:27:23.958 18:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:27:24.216 [2024-10-28 18:17:40.667730] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:27:24.216 18:17:40 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:27:24.780 [2024-10-28 18:17:40.981502] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:24.780 18:17:40 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:27:25.037 Fill FTL, iteration 1 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=80836 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:27:25.037 18:17:41 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 80836 /var/tmp/spdk.tgt.sock 00:27:25.038 18:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 80836 ']' 00:27:25.038 18:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:27:25.038 18:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:25.038 18:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:27:25.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:27:25.038 18:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:25.038 18:17:41 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:25.295 [2024-10-28 18:17:41.550562] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
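
The FTL bdev is then exported over NVMe/TCP so that all fill and verify I/O goes through an initiator rather than touching the bdev directly; the test stages two 1 GiB iterations (bs=1048576, count=1024) at queue depth 2. The export consists of the four nvmf RPCs traced above, roughly:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Export the FTL bdev as a namespace of a single TCP subsystem on loopback
    $rpc nvmf_create_transport --trtype TCP
    $rpc nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1
    $rpc nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl
    $rpc nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 \
        -t TCP -f ipv4 -s 4420 -a 127.0.0.1
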
00:27:25.295 [2024-10-28 18:17:41.551358] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80836 ] 00:27:25.553 [2024-10-28 18:17:41.779005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:25.553 [2024-10-28 18:17:41.940758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.486 18:17:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:26.486 18:17:42 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:27:26.486 18:17:42 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:27:26.743 ftln1 00:27:26.743 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:27:26.743 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 80836 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80836 ']' 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80836 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80836 00:27:27.001 killing process with pid 80836 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80836' 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80836 00:27:27.001 18:17:43 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80836 00:27:29.528 18:17:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:27:29.528 18:17:45 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:27:29.528 [2024-10-28 18:17:45.566495] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
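
tcp_dd is a thin wrapper around spdk_dd: since spdk_dd is a one-shot app, the initiator's bdev configuration has to be prepared up front. The helper boots a throwaway spdk_tgt on core 1 with its own RPC socket, attaches the TCP subsystem there (which surfaces the namespace as bdev ftln1), wraps its bdev subsystem config in a JSON document, kills the target, and hands that JSON to spdk_dd. A condensed sketch of the steps traced above; the redirection into the traced test/ftl/config/ini.json is an assumption (the trace shows the echoes and save_subsystem_config, not the redirect), and waitforlisten/killprocess are the autotest_common.sh helpers seen in the trace:

    # Boot a throwaway initiator target on core 1 with its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock & spdk_ini_pid=$!
    waitforlisten "$spdk_ini_pid" /var/tmp/spdk.tgt.sock
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock"
    # Attach the TCP subsystem; its namespace shows up as bdev ftln1
    $rpc bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2018-09.io.spdk:cnode0
    # Capture the initiator's bdev config as a standalone JSON document
    {
        echo '{"subsystems": ['
        $rpc save_subsystem_config -n bdev
        echo ']}'
    } > "$ini_json"                # assumed redirect target: test/ftl/config/ini.json
    killprocess "$spdk_ini_pid"
    # Fill iteration 1: 1024 x 1 MiB blocks of urandom at queue depth 2
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini_json" \
        --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

On later iterations the setup is skipped: tcp_initiator_setup sees that ini.json already exists and returns immediately (the `-- # return 0` at ftl/common.sh@154 in the trace).
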
00:27:29.528 [2024-10-28 18:17:45.566991] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80890 ] 00:27:29.528 [2024-10-28 18:17:45.754514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.528 [2024-10-28 18:17:45.857626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:30.909  [2024-10-28T18:17:48.318Z] Copying: 204/1024 [MB] (204 MBps) [2024-10-28T18:17:49.705Z] Copying: 410/1024 [MB] (206 MBps) [2024-10-28T18:17:50.639Z] Copying: 615/1024 [MB] (205 MBps) [2024-10-28T18:17:51.579Z] Copying: 804/1024 [MB] (189 MBps) [2024-10-28T18:17:51.579Z] Copying: 993/1024 [MB] (189 MBps) [2024-10-28T18:17:52.517Z] Copying: 1024/1024 [MB] (average 198 MBps) 00:27:36.039 00:27:36.039 18:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:27:36.039 Calculate MD5 checksum, iteration 1 00:27:36.039 18:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:27:36.039 18:17:52 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:36.039 18:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:36.039 18:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:36.039 18:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:36.039 18:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:36.039 18:17:52 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:27:36.297 [2024-10-28 18:17:52.598862] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
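
Each fill is immediately verified: the same 1 GiB region is read back through the TCP initiator into a scratch file and hashed, and the checksum is stashed for comparison after the upgrade/shutdown cycle. In outline, using the spdk_dd invocation and md5sum pipeline from the trace:

    # Read back the GiB written above (skip=0) and record its MD5
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' \
        --rpc-socket=/var/tmp/spdk.tgt.sock --json="$ini_json" \
        --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file \
        --bs=1048576 --count=1024 --qd=2 --skip=0
    sums[i]=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 -d' ')
    # trace result for iteration 1: e4fcd7eb3288eeb38af1afccb38f958d
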
00:27:36.297 [2024-10-28 18:17:52.599087] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80960 ] 00:27:36.554 [2024-10-28 18:17:52.778982] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:36.554 [2024-10-28 18:17:52.882273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:37.926  [2024-10-28T18:17:55.337Z] Copying: 483/1024 [MB] (483 MBps) [2024-10-28T18:17:55.910Z] Copying: 865/1024 [MB] (382 MBps) [2024-10-28T18:17:56.843Z] Copying: 1024/1024 [MB] (average 431 MBps) 00:27:40.365 00:27:40.365 18:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:27:40.365 18:17:56 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=e4fcd7eb3288eeb38af1afccb38f958d 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:42.293 Fill FTL, iteration 2 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:42.293 18:17:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:27:42.551 [2024-10-28 18:17:58.892380] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
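
Iteration 2 repeats the pattern one stripe further in: seek and skip each advance by count (1024 blocks, i.e. 1 GiB) per pass. The traced variable updates (seek=1024, skip=1024, sums[i]=..., (( i++ )), (( i < iterations ))) imply a driver loop along these lines; this is a hedged reconstruction, not the verbatim upgrade_shutdown.sh:

    bs=1048576 count=1024 qd=2 iterations=2
    seek=0 skip=0 sums=()
    testfile=/home/vagrant/spdk_repo/spdk/test/ftl/file
    for ((i = 0; i < iterations; i++)); do
        echo "Fill FTL, iteration $((i + 1))"
        tcp_dd --if=/dev/urandom --ob=ftln1 --bs=$bs --count=$count --qd=$qd --seek=$seek
        seek=$((seek + count))
        echo "Calculate MD5 checksum, iteration $((i + 1))"
        tcp_dd --ib=ftln1 --of="$testfile" --bs=$bs --count=$count --qd=$qd --skip=$skip
        skip=$((skip + count))
        sums[i]=$(md5sum "$testfile" | cut -f1 -d' ')
    done
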
00:27:42.551 [2024-10-28 18:17:58.892615] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81022 ] 00:27:42.808 [2024-10-28 18:17:59.091515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.808 [2024-10-28 18:17:59.270593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:44.708  [2024-10-28T18:18:01.752Z] Copying: 178/1024 [MB] (178 MBps) [2024-10-28T18:18:03.140Z] Copying: 346/1024 [MB] (168 MBps) [2024-10-28T18:18:03.728Z] Copying: 512/1024 [MB] (166 MBps) [2024-10-28T18:18:05.101Z] Copying: 691/1024 [MB] (179 MBps) [2024-10-28T18:18:05.667Z] Copying: 860/1024 [MB] (169 MBps) [2024-10-28T18:18:07.038Z] Copying: 1024/1024 [MB] (average 172 MBps) 00:27:50.560 00:27:50.560 18:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:27:50.560 Calculate MD5 checksum, iteration 2 00:27:50.560 18:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:27:50.560 18:18:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:50.560 18:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:27:50.560 18:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:27:50.560 18:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:27:50.560 18:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:27:50.560 18:18:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:27:50.818 [2024-10-28 18:18:07.097468] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
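
The second checksum (c329ab633faabebef0ad3bbc9d9f6ab4, recorded below) completes the data-staging phase, and the test moves on to arming the shutdown upgrade: verbose_mode is enabled so bdev_ftl_get_properties exposes per-band and per-chunk state, prep_upgrade_on_shutdown is switched on so the next shutdown performs the layout upgrade, and a jq filter confirms the NV cache actually absorbed data. Condensed from the RPCs traced below (the failure path on a zero count is a hypothetical; the trace only shows the `[[ 3 -eq 0 ]]` test):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_ftl_set_property -b ftl -p verbose_mode -v true
    $rpc bdev_ftl_get_properties -b ftl        # dumps band/chunk state as JSON
    $rpc bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    # Count cache chunks with nonzero utilization; must not be 0 before upgrading
    used=$($rpc bdev_ftl_get_properties -b ftl | jq '[.properties[]
        | select(.name == "cache_device") | .chunks[]
        | select(.utilization != 0.0)] | length')
    [[ $used -eq 0 ]] && exit 1   # hypothetical; trace shows used=3
    # (chunks 1-2 CLOSED at utilization 1.0, chunk 3 OPEN at 0.001953125)
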
00:27:50.818 [2024-10-28 18:18:07.097733] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81103 ] 00:27:51.076 [2024-10-28 18:18:07.310080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:51.076 [2024-10-28 18:18:07.489859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.976  [2024-10-28T18:18:10.825Z] Copying: 425/1024 [MB] (425 MBps) [2024-10-28T18:18:11.083Z] Copying: 806/1024 [MB] (381 MBps) [2024-10-28T18:18:12.982Z] Copying: 1024/1024 [MB] (average 403 MBps) 00:27:56.504 00:27:56.504 18:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:27:56.504 18:18:12 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:27:59.787 18:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:27:59.787 18:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=c329ab633faabebef0ad3bbc9d9f6ab4 00:27:59.787 18:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:27:59.787 18:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:27:59.787 18:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:27:59.787 [2024-10-28 18:18:15.933502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.787 [2024-10-28 18:18:15.933594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:27:59.787 [2024-10-28 18:18:15.933635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:27:59.787 [2024-10-28 18:18:15.933655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.787 [2024-10-28 18:18:15.933722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.787 [2024-10-28 18:18:15.933750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:27:59.787 [2024-10-28 18:18:15.933771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:27:59.787 [2024-10-28 18:18:15.933800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.787 [2024-10-28 18:18:15.933869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:27:59.787 [2024-10-28 18:18:15.933897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:27:59.787 [2024-10-28 18:18:15.933918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:27:59.788 [2024-10-28 18:18:15.933932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:27:59.788 [2024-10-28 18:18:15.934022] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.545 ms, result 0 00:27:59.788 true 00:27:59.788 18:18:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:27:59.788 { 00:27:59.788 "name": "ftl", 00:27:59.788 "properties": [ 00:27:59.788 { 00:27:59.788 "name": "superblock_version", 00:27:59.788 "value": 5, 00:27:59.788 "read-only": true 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "name": "base_device", 00:27:59.788 "bands": [ 00:27:59.788 { 00:27:59.788 "id": 
0, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 1, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 2, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 3, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 4, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 5, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 6, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 7, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 8, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 9, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 10, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 11, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 12, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 13, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 14, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 15, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 16, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 17, 00:27:59.788 "state": "FREE", 00:27:59.788 "validity": 0.0 00:27:59.788 } 00:27:59.788 ], 00:27:59.788 "read-only": true 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "name": "cache_device", 00:27:59.788 "type": "bdev", 00:27:59.788 "chunks": [ 00:27:59.788 { 00:27:59.788 "id": 0, 00:27:59.788 "state": "INACTIVE", 00:27:59.788 "utilization": 0.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 1, 00:27:59.788 "state": "CLOSED", 00:27:59.788 "utilization": 1.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 2, 00:27:59.788 "state": "CLOSED", 00:27:59.788 "utilization": 1.0 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 3, 00:27:59.788 "state": "OPEN", 00:27:59.788 "utilization": 0.001953125 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "id": 4, 00:27:59.788 "state": "OPEN", 00:27:59.788 "utilization": 0.0 00:27:59.788 } 00:27:59.788 ], 00:27:59.788 "read-only": true 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "name": "verbose_mode", 00:27:59.788 "value": true, 00:27:59.788 "unit": "", 00:27:59.788 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:27:59.788 }, 00:27:59.788 { 00:27:59.788 "name": "prep_upgrade_on_shutdown", 00:27:59.788 "value": false, 00:27:59.788 "unit": "", 00:27:59.788 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:27:59.788 } 00:27:59.788 ] 00:27:59.788 } 00:27:59.788 18:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:28:00.354 [2024-10-28 18:18:16.650559] 
[2024-10-28 18:18:16.650559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:16.650639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
[2024-10-28 18:18:16.650664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms
[2024-10-28 18:18:16.650676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:16.650718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:16.650735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
[2024-10-28 18:18:16.650747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
[2024-10-28 18:18:16.650759] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:16.650811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:16.650853] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
[2024-10-28 18:18:16.650872] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
[2024-10-28 18:18:16.650883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:16.650969] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.410 ms, result 0
true
18:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'
18:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties
18:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
18:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3
18:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]]
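The @63 step above counts NV-cache chunks that still hold user data. Run standalone against the property dump shown earlier (both commands appear verbatim in the trace), the pipeline returns 3: chunks 1 and 2 are CLOSED at utilization 1.0 and chunk 3 is OPEN at 0.001953125, which is why used=3 and the [[ 3 -eq 0 ]] guard falls through before shutdown:

    # count NV-cache chunks with non-zero utilization (filter copied verbatim from upgrade_shutdown.sh@63)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl \
        | jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length'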
18:18:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
[2024-10-28 18:18:17.319445] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:17.319541] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
[2024-10-28 18:18:17.319564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.008 ms
[2024-10-28 18:18:17.319577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:17.319618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:17.319636] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
[2024-10-28 18:18:17.319649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
[2024-10-28 18:18:17.319661] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:17.319690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:17.319705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
[2024-10-28 18:18:17.319717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
[2024-10-28 18:18:17.319729] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:17.319827] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.351 ms, result 0
true
18:18:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
{
  "name": "ftl",
  "properties": [
    { "name": "superblock_version", "value": 5, "read-only": true },
    {
      "name": "base_device",
      "bands": [
        { "id": 0, "state": "FREE", "validity": 0.0 },
        { "id": 1, "state": "FREE", "validity": 0.0 },
        { "id": 2, "state": "FREE", "validity": 0.0 },
        { "id": 3, "state": "FREE", "validity": 0.0 },
        { "id": 4, "state": "FREE", "validity": 0.0 },
        { "id": 5, "state": "FREE", "validity": 0.0 },
        { "id": 6, "state": "FREE", "validity": 0.0 },
        { "id": 7, "state": "FREE", "validity": 0.0 },
        { "id": 8, "state": "FREE", "validity": 0.0 },
        { "id": 9, "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
        { "id": 1, "state": "CLOSED", "utilization": 1.0 },
        { "id": 2, "state": "CLOSED", "utilization": 1.0 },
        { "id": 3, "state": "OPEN", "utilization": 0.001953125 },
        { "id": 4, "state": "OPEN", "utilization": 0.0 }
      ],
      "read-only": true
    },
    { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
    { "name": "prep_upgrade_on_shutdown", "value": true, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
  ]
}
18:18:17 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown
18:18:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 80691 ]]
18:18:17 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 80691
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 80691 ']'
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 80691
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 80691
killing process with pid 80691
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']'
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@970 -- # echo 'killing process with pid 80691'
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 80691
18:18:17 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 80691
[2024-10-28 18:18:18.843431] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000
[2024-10-28 18:18:18.861402] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:18.861488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel
[2024-10-28 18:18:18.861510] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms
[2024-10-28 18:18:18.861523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:18.861559] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread
[2024-10-28 18:18:18.865269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:18.865321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device
[2024-10-28 18:18:18.865340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.683 ms
[2024-10-28 18:18:18.865352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:27.994494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:27.994580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller
[2024-10-28 18:18:27.994602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9129.146 ms
[2024-10-28 18:18:27.994622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
18:18:27.995879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.792 [2024-10-28 18:18:27.995920] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:12.792 [2024-10-28 18:18:27.995937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.231 ms 00:28:12.792 [2024-10-28 18:18:27.995950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.792 [2024-10-28 18:18:27.997200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.792 [2024-10-28 18:18:27.997255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:12.792 [2024-10-28 18:18:27.997271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.206 ms 00:28:12.792 [2024-10-28 18:18:27.997283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.792 [2024-10-28 18:18:28.010091] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.792 [2024-10-28 18:18:28.010148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:12.792 [2024-10-28 18:18:28.010166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.753 ms 00:28:12.792 [2024-10-28 18:18:28.010179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.792 [2024-10-28 18:18:28.018064] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.792 [2024-10-28 18:18:28.018113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:12.792 [2024-10-28 18:18:28.018131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.837 ms 00:28:12.792 [2024-10-28 18:18:28.018144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.792 [2024-10-28 18:18:28.018274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.792 [2024-10-28 18:18:28.018297] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:12.792 [2024-10-28 18:18:28.018318] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.084 ms 00:28:12.792 [2024-10-28 18:18:28.018330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.792 [2024-10-28 18:18:28.030714] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.792 [2024-10-28 18:18:28.030772] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:12.792 [2024-10-28 18:18:28.030790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.359 ms 00:28:12.792 [2024-10-28 18:18:28.030802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.792 [2024-10-28 18:18:28.043151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.792 [2024-10-28 18:18:28.043207] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:12.792 [2024-10-28 18:18:28.043225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.289 ms 00:28:12.792 [2024-10-28 18:18:28.043236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.792 [2024-10-28 18:18:28.055481] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.792 [2024-10-28 18:18:28.055537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:12.792 [2024-10-28 18:18:28.055555] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.195 ms 00:28:12.792 [2024-10-28 18:18:28.055567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:28:12.792 [2024-10-28 18:18:28.067936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.792 [2024-10-28 18:18:28.067990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:12.792 [2024-10-28 18:18:28.068006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.255 ms 00:28:12.792 [2024-10-28 18:18:28.068018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.792 [2024-10-28 18:18:28.068062] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:12.792 [2024-10-28 18:18:28.068089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:12.792 [2024-10-28 18:18:28.068104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:12.792 [2024-10-28 18:18:28.068137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:12.792 [2024-10-28 18:18:28.068151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:12.792 [2024-10-28 18:18:28.068163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:12.792 [2024-10-28 18:18:28.068176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:12.792 [2024-10-28 18:18:28.068188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:12.792 [2024-10-28 18:18:28.068201] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:12.792 [2024-10-28 18:18:28.068213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:12.792 [2024-10-28 18:18:28.068225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:12.792 [2024-10-28 18:18:28.068238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:12.793 [2024-10-28 18:18:28.068250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:12.793 [2024-10-28 18:18:28.068262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:12.793 [2024-10-28 18:18:28.068274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:12.793 [2024-10-28 18:18:28.068286] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:12.793 [2024-10-28 18:18:28.068298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:12.793 [2024-10-28 18:18:28.068314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:12.793 [2024-10-28 18:18:28.068327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:12.793 [2024-10-28 18:18:28.068342] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:12.793 [2024-10-28 18:18:28.068353] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 32bdcf0b-f3ee-4351-9671-39840621531e 00:28:12.793 [2024-10-28 18:18:28.068366] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:12.793 [2024-10-28 
18:18:28.068377] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 786752 00:28:12.793 [2024-10-28 18:18:28.068388] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:28:12.793 [2024-10-28 18:18:28.068400] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:28:12.793 [2024-10-28 18:18:28.068411] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:12.793 [2024-10-28 18:18:28.068429] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:12.793 [2024-10-28 18:18:28.068441] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:12.793 [2024-10-28 18:18:28.068452] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:12.793 [2024-10-28 18:18:28.068463] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:12.793 [2024-10-28 18:18:28.068475] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.793 [2024-10-28 18:18:28.068490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:12.793 [2024-10-28 18:18:28.068503] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.414 ms 00:28:12.793 [2024-10-28 18:18:28.068515] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.085309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.793 [2024-10-28 18:18:28.085376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:12.793 [2024-10-28 18:18:28.085395] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.742 ms 00:28:12.793 [2024-10-28 18:18:28.085419] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.085901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:12.793 [2024-10-28 18:18:28.085927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:12.793 [2024-10-28 18:18:28.085941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.432 ms 00:28:12.793 [2024-10-28 18:18:28.085953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.140951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.141028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:12.793 [2024-10-28 18:18:28.141055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.141076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.141147] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.141171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:12.793 [2024-10-28 18:18:28.141184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.141196] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.141340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.141361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:12.793 [2024-10-28 18:18:28.141374] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.141387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] 
status: 0 00:28:12.793 [2024-10-28 18:18:28.141419] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.141434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:12.793 [2024-10-28 18:18:28.141446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.141457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.245263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.245337] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:12.793 [2024-10-28 18:18:28.245357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.245380] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.331037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.331115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:12.793 [2024-10-28 18:18:28.331135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.331148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.331280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.331311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:12.793 [2024-10-28 18:18:28.331324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.331336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.331411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.331431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:12.793 [2024-10-28 18:18:28.331444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.331456] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.331592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.331624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:12.793 [2024-10-28 18:18:28.331639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.331651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.331705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.331729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:12.793 [2024-10-28 18:18:28.331742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.331754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:12.793 [2024-10-28 18:18:28.331799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:12.793 [2024-10-28 18:18:28.331827] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:12.793 [2024-10-28 18:18:28.331866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:12.793 [2024-10-28 18:18:28.331880] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:28.331955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback
[2024-10-28 18:18:28.331974] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev
[2024-10-28 18:18:28.331987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms
[2024-10-28 18:18:28.331999] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:28.332143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 9470.779 ms, result 0
18:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid
18:18:31 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup
18:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev=
18:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev=
18:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
18:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81345
18:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json
18:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid
18:18:31 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81345
18:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81345 ']'
18:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
18:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100
18:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
18:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable
18:18:31 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x
[2024-10-28 18:18:31.798578] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization...
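tcp_target_setup (ftl/common.sh) relaunches spdk_tgt from the saved tgt.json and waitforlisten blocks until the new process answers RPCs; the EAL banner that follows belongs to the relaunched target. A sketch of that pattern: the binary path, cpumask, config file, socket path and max_retries=100 come from the trace above, while the polling loop itself is an assumption (the real waitforlisten lives in autotest_common.sh):

    # relaunch the target and wait for its RPC socket (sketch under stated assumptions)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --cpumask='[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    for (( retry = 0; retry < 100; retry++ )); do
        # rpc_get_methods is a standard SPDK RPC; success means the socket is up
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done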
00:28:15.325 [2024-10-28 18:18:31.798768] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81345 ] 00:28:15.586 [2024-10-28 18:18:31.982859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:15.851 [2024-10-28 18:18:32.089042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:16.787 [2024-10-28 18:18:32.941538] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:16.787 [2024-10-28 18:18:32.941619] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:16.787 [2024-10-28 18:18:33.090075] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.090180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:16.787 [2024-10-28 18:18:33.090201] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:16.787 [2024-10-28 18:18:33.090213] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.090289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.090308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:16.787 [2024-10-28 18:18:33.090321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.044 ms 00:28:16.787 [2024-10-28 18:18:33.090333] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.090375] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:16.787 [2024-10-28 18:18:33.091347] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:16.787 [2024-10-28 18:18:33.091391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.091405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:16.787 [2024-10-28 18:18:33.091419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.030 ms 00:28:16.787 [2024-10-28 18:18:33.091430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.092695] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:16.787 [2024-10-28 18:18:33.109842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.109907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:16.787 [2024-10-28 18:18:33.109947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.148 ms 00:28:16.787 [2024-10-28 18:18:33.109959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.110050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.110070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:16.787 [2024-10-28 18:18:33.110082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:28:16.787 [2024-10-28 18:18:33.110093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.114755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 
18:18:33.114822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:16.787 [2024-10-28 18:18:33.114864] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.523 ms 00:28:16.787 [2024-10-28 18:18:33.114877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.114984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.115005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:16.787 [2024-10-28 18:18:33.115019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:28:16.787 [2024-10-28 18:18:33.115040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.115109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.115127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:16.787 [2024-10-28 18:18:33.115147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:16.787 [2024-10-28 18:18:33.115164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.115203] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:16.787 [2024-10-28 18:18:33.119778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.119870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:16.787 [2024-10-28 18:18:33.119889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.586 ms 00:28:16.787 [2024-10-28 18:18:33.119908] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.119946] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.119962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:16.787 [2024-10-28 18:18:33.119974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:16.787 [2024-10-28 18:18:33.119985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.120038] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:16.787 [2024-10-28 18:18:33.120071] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:16.787 [2024-10-28 18:18:33.120120] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:16.787 [2024-10-28 18:18:33.120139] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:16.787 [2024-10-28 18:18:33.120254] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:16.787 [2024-10-28 18:18:33.120270] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:16.787 [2024-10-28 18:18:33.120286] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:16.787 [2024-10-28 18:18:33.120301] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:16.787 [2024-10-28 18:18:33.120316] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:28:16.787 [2024-10-28 18:18:33.120333] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:16.787 [2024-10-28 18:18:33.120345] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:16.787 [2024-10-28 18:18:33.120355] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:16.787 [2024-10-28 18:18:33.120366] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:16.787 [2024-10-28 18:18:33.120378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.120389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:16.787 [2024-10-28 18:18:33.120401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.344 ms 00:28:16.787 [2024-10-28 18:18:33.120413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.120537] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.787 [2024-10-28 18:18:33.120555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:16.787 [2024-10-28 18:18:33.120567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:16.787 [2024-10-28 18:18:33.120585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.787 [2024-10-28 18:18:33.120705] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:16.787 [2024-10-28 18:18:33.120738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:16.787 [2024-10-28 18:18:33.120753] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:16.787 [2024-10-28 18:18:33.120765] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.120777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:16.788 [2024-10-28 18:18:33.120788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.120800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:16.788 [2024-10-28 18:18:33.120811] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:16.788 [2024-10-28 18:18:33.120832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:16.788 [2024-10-28 18:18:33.120858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.120870] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:16.788 [2024-10-28 18:18:33.120881] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:16.788 [2024-10-28 18:18:33.120892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.120903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:16.788 [2024-10-28 18:18:33.120913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:28:16.788 [2024-10-28 18:18:33.120925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.120936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:16.788 [2024-10-28 18:18:33.120947] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:16.788 [2024-10-28 18:18:33.120957] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.120968] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:16.788 [2024-10-28 18:18:33.120979] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:16.788 [2024-10-28 18:18:33.120990] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:16.788 [2024-10-28 18:18:33.121000] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:16.788 [2024-10-28 18:18:33.121012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:16.788 [2024-10-28 18:18:33.121023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:16.788 [2024-10-28 18:18:33.121048] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:16.788 [2024-10-28 18:18:33.121059] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:16.788 [2024-10-28 18:18:33.121070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:16.788 [2024-10-28 18:18:33.121081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:16.788 [2024-10-28 18:18:33.121091] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:16.788 [2024-10-28 18:18:33.121102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:16.788 [2024-10-28 18:18:33.121113] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:16.788 [2024-10-28 18:18:33.121124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:16.788 [2024-10-28 18:18:33.121135] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.121145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:16.788 [2024-10-28 18:18:33.121156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:16.788 [2024-10-28 18:18:33.121167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.121178] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:16.788 [2024-10-28 18:18:33.121189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:16.788 [2024-10-28 18:18:33.121199] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.121210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:16.788 [2024-10-28 18:18:33.121220] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:16.788 [2024-10-28 18:18:33.121231] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.121242] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:16.788 [2024-10-28 18:18:33.121254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:16.788 [2024-10-28 18:18:33.121265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:16.788 [2024-10-28 18:18:33.121276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:16.788 [2024-10-28 18:18:33.121295] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:16.788 [2024-10-28 18:18:33.121307] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:16.788 [2024-10-28 18:18:33.121318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:16.788 [2024-10-28 18:18:33.121329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:16.788 [2024-10-28 18:18:33.121350] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:16.788 [2024-10-28 18:18:33.121361] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:16.788 [2024-10-28 18:18:33.121374] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:16.788 [2024-10-28 18:18:33.121388] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121401] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:16.788 [2024-10-28 18:18:33.121412] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121424] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121435] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:16.788 [2024-10-28 18:18:33.121446] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:16.788 [2024-10-28 18:18:33.121457] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:16.788 [2024-10-28 18:18:33.121468] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:16.788 [2024-10-28 18:18:33.121480] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121516] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121538] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:16.788 [2024-10-28 18:18:33.121561] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:28:16.788 [2024-10-28 18:18:33.121574] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121587] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:16.788 [2024-10-28 18:18:33.121598] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:16.788 [2024-10-28 18:18:33.121610] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:16.788 [2024-10-28 18:18:33.121621] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:16.788 [2024-10-28 18:18:33.121634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:16.788 [2024-10-28 18:18:33.121645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:16.788 [2024-10-28 18:18:33.121657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.000 ms 00:28:16.788 [2024-10-28 18:18:33.121668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:16.788 [2024-10-28 18:18:33.121730] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:28:16.788 [2024-10-28 18:18:33.121757] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:28:19.317 [2024-10-28 18:18:35.343561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.343635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:28:19.317 [2024-10-28 18:18:35.343657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2221.844 ms 00:28:19.317 [2024-10-28 18:18:35.343679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.377163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.377265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:19.317 [2024-10-28 18:18:35.377303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 33.126 ms 00:28:19.317 [2024-10-28 18:18:35.377315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.377469] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.377497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:19.317 [2024-10-28 18:18:35.377511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:19.317 [2024-10-28 18:18:35.377522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.419188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.419269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:19.317 [2024-10-28 18:18:35.419305] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.599 ms 00:28:19.317 [2024-10-28 18:18:35.419323] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.419403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.419420] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:19.317 [2024-10-28 18:18:35.419433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:19.317 [2024-10-28 18:18:35.419444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.419884] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.419914] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:19.317 [2024-10-28 18:18:35.419929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.326 ms 00:28:19.317 [2024-10-28 18:18:35.419941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.420012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.420028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:19.317 [2024-10-28 18:18:35.420041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:28:19.317 [2024-10-28 18:18:35.420052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.438540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.438613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:19.317 [2024-10-28 18:18:35.438645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.455 ms 00:28:19.317 [2024-10-28 18:18:35.438658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.456287] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:28:19.317 [2024-10-28 18:18:35.456383] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:19.317 [2024-10-28 18:18:35.456406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.456418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:28:19.317 [2024-10-28 18:18:35.456434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.503 ms 00:28:19.317 [2024-10-28 18:18:35.456445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.476202] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.476309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:28:19.317 [2024-10-28 18:18:35.476330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.640 ms 00:28:19.317 [2024-10-28 18:18:35.476343] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.492526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.492608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:28:19.317 [2024-10-28 18:18:35.492628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.083 ms 00:28:19.317 [2024-10-28 18:18:35.492655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.508949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.509046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:28:19.317 [2024-10-28 18:18:35.509085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.183 ms 00:28:19.317 [2024-10-28 18:18:35.509096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.509986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.510027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:19.317 [2024-10-28 
18:18:35.510043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.704 ms 00:28:19.317 [2024-10-28 18:18:35.510055] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.596224] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.317 [2024-10-28 18:18:35.596293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:19.317 [2024-10-28 18:18:35.596314] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 86.131 ms 00:28:19.317 [2024-10-28 18:18:35.596327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.317 [2024-10-28 18:18:35.609780] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:19.318 [2024-10-28 18:18:35.610654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.318 [2024-10-28 18:18:35.610704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:19.318 [2024-10-28 18:18:35.610725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.237 ms 00:28:19.318 [2024-10-28 18:18:35.610738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.318 [2024-10-28 18:18:35.610886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.318 [2024-10-28 18:18:35.610912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:28:19.318 [2024-10-28 18:18:35.610926] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:28:19.318 [2024-10-28 18:18:35.610938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.318 [2024-10-28 18:18:35.611024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.318 [2024-10-28 18:18:35.611044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:19.318 [2024-10-28 18:18:35.611057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:28:19.318 [2024-10-28 18:18:35.611069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.318 [2024-10-28 18:18:35.611105] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.318 [2024-10-28 18:18:35.611120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:19.318 [2024-10-28 18:18:35.611132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:19.318 [2024-10-28 18:18:35.611149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.318 [2024-10-28 18:18:35.611190] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:19.318 [2024-10-28 18:18:35.611206] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.318 [2024-10-28 18:18:35.611217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:19.318 [2024-10-28 18:18:35.611229] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:28:19.318 [2024-10-28 18:18:35.611240] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:19.318 [2024-10-28 18:18:35.645740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:19.318 [2024-10-28 18:18:35.645810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:28:19.318 [2024-10-28 18:18:35.645844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.467 ms 00:28:19.318 [2024-10-28 18:18:35.645867] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:35.645961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:35.645979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization
[2024-10-28 18:18:35.645992] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms
[2024-10-28 18:18:35.646003] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:35.647401] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 2556.744 ms, result 0
[2024-10-28 18:18:35.662228] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[2024-10-28 18:18:35.678241] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000
[2024-10-28 18:18:35.687452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
18:18:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 ))
18:18:36 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0
18:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]]
18:18:36 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0
18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true
[2024-10-28 18:18:36.664533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:36.664600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property
[2024-10-28 18:18:36.664619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms
[2024-10-28 18:18:36.664635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:36.664668] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:36.664683] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property
[2024-10-28 18:18:36.664695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms
[2024-10-28 18:18:36.664706] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:36.664731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action
[2024-10-28 18:18:36.664744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup
[2024-10-28 18:18:36.664755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms
[2024-10-28 18:18:36.664765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0
[2024-10-28 18:18:36.664839] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.291 ms, result 0
true
18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl
{
  "name": "ftl",
  "properties": [
    { "name": "superblock_version", "value": 5, "read-only": true },
    {
      "name": "base_device",
      "bands": [
        { "id": 0, "state": "CLOSED", "validity": 1.0 },
        { "id": 1, "state": "CLOSED", "validity": 1.0 },
        { "id": 2, "state": "CLOSED", "validity": 0.007843137254901933 },
        { "id": 3, "state": "FREE", "validity": 0.0 },
        { "id": 4, "state": "FREE", "validity": 0.0 },
        { "id": 5, "state": "FREE", "validity": 0.0 },
        { "id": 6, "state": "FREE", "validity": 0.0 },
        { "id": 7, "state": "FREE", "validity": 0.0 },
        { "id": 8, "state": "FREE", "validity": 0.0 },
        { "id": 9, "state": "FREE", "validity": 0.0 },
        { "id": 10, "state": "FREE", "validity": 0.0 },
        { "id": 11, "state": "FREE", "validity": 0.0 },
        { "id": 12, "state": "FREE", "validity": 0.0 },
        { "id": 13, "state": "FREE", "validity": 0.0 },
        { "id": 14, "state": "FREE", "validity": 0.0 },
        { "id": 15, "state": "FREE", "validity": 0.0 },
        { "id": 16, "state": "FREE", "validity": 0.0 },
        { "id": 17, "state": "FREE", "validity": 0.0 }
      ],
      "read-only": true
    },
    {
      "name": "cache_device",
      "type": "bdev",
      "chunks": [
        { "id": 0, "state": "INACTIVE", "utilization": 0.0 },
        { "id": 1, "state": "OPEN", "utilization": 0.0 },
        { "id": 2, "state": "OPEN", "utilization": 0.0 },
        { "id": 3, "state": "FREE", "utilization": 0.0 },
        { "id": 4, "state": "FREE", "utilization": 0.0 }
      ],
      "read-only": true
    },
    { "name": "verbose_mode", "value": true, "unit": "", "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" },
    { "name": "prep_upgrade_on_shutdown", "value": false, "unit": "", "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" }
  ]
}
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:28:20.512 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:28:20.512 18:18:36 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:21.078 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:28:21.078 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:28:21.078 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:28:21.078 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:28:21.078 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:28:21.336 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:28:21.336 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:28:21.336 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:28:21.336 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:21.336 Validate MD5 checksum, iteration 1 00:28:21.336 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:21.336 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:21.336 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:21.336 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:21.336 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:21.337 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:21.337 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:21.337 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:21.337 18:18:37 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:21.337 [2024-10-28 18:18:37.725633] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:28:21.337 [2024-10-28 18:18:37.725795] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81419 ] 00:28:21.594 [2024-10-28 18:18:37.910884] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.594 [2024-10-28 18:18:38.012379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.495  [2024-10-28T18:18:40.908Z] Copying: 499/1024 [MB] (499 MBps) [2024-10-28T18:18:40.908Z] Copying: 960/1024 [MB] (461 MBps) [2024-10-28T18:18:42.283Z] Copying: 1024/1024 [MB] (average 478 MBps) 00:28:25.805 00:28:25.805 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:25.805 18:18:42 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:28.333 Validate MD5 checksum, iteration 2 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e4fcd7eb3288eeb38af1afccb38f958d 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e4fcd7eb3288eeb38af1afccb38f958d != \e\4\f\c\d\7\e\b\3\2\8\8\e\e\b\3\8\a\f\1\a\f\c\c\b\3\8\f\9\5\8\d ]] 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:28.333 18:18:44 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:28.333 [2024-10-28 18:18:44.420808] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
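Each "Validate MD5 checksum" iteration above reads 1024 MiB from the ftln1 initiator-side bdev at a growing offset, hashes the output file, and compares the digest against the checksum recorded when this data was written earlier in the test. The heavily backslash-escaped right-hand side in the trace is simply how bash xtrace prints the pattern operand of a [[ ... != ... ]] test; the two values are identical, so the check passes. A reconstructed shape of one iteration follows (the names testfile and expected_sum are illustrative, not taken from the script):

  tcp_dd --ib=ftln1 --of="$testfile" --bs=1048576 --count=1024 --qd=2 --skip=$skip
  sum=$(md5sum "$testfile" | cut -d' ' -f1)   # e.g. e4fcd7eb3288eeb38af1afccb38f958d
  [[ $sum != "$expected_sum" ]] && exit 1     # must match the digest recorded at write time
  skip=$((skip + 1024))                       # next pass starts 1024 MiB further in

The same loop reruns after the dirty restart below; matching digests on both passes are what prove no data was lost.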
00:28:28.333 [2024-10-28 18:18:44.420973] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81491 ] 00:28:28.333 [2024-10-28 18:18:44.593823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.333 [2024-10-28 18:18:44.713560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.230  [2024-10-28T18:18:47.641Z] Copying: 488/1024 [MB] (488 MBps) [2024-10-28T18:18:47.641Z] Copying: 987/1024 [MB] (499 MBps) [2024-10-28T18:18:48.574Z] Copying: 1024/1024 [MB] (average 492 MBps) 00:28:32.096 00:28:32.096 18:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:32.096 18:18:48 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c329ab633faabebef0ad3bbc9d9f6ab4 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c329ab633faabebef0ad3bbc9d9f6ab4 != \c\3\2\9\a\b\6\3\3\f\a\a\b\e\b\e\f\0\a\d\3\b\b\c\9\d\9\f\6\a\b\4 ]] 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 81345 ]] 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 81345 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=81554 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 81554 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@833 -- # '[' -z 81554 ']' 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:34.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:34.653 18:18:50 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:34.653 [2024-10-28 18:18:50.879394] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:28:34.653 [2024-10-28 18:18:50.879549] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81554 ] 00:28:34.653 [2024-10-28 18:18:51.054263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:34.653 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: 81345 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:28:34.911 [2024-10-28 18:18:51.175447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:35.846 [2024-10-28 18:18:52.034305] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:35.846 [2024-10-28 18:18:52.034391] bdev.c:8271:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:28:35.846 [2024-10-28 18:18:52.183008] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.846 [2024-10-28 18:18:52.183077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:28:35.846 [2024-10-28 18:18:52.183097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:28:35.846 [2024-10-28 18:18:52.183109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.846 [2024-10-28 18:18:52.183193] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.846 [2024-10-28 18:18:52.183212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:35.846 [2024-10-28 18:18:52.183224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.051 ms 00:28:35.846 [2024-10-28 18:18:52.183236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.846 [2024-10-28 18:18:52.183278] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:28:35.846 [2024-10-28 18:18:52.184271] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:28:35.846 [2024-10-28 18:18:52.184312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.846 [2024-10-28 18:18:52.184325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:35.846 [2024-10-28 18:18:52.184337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.049 ms 00:28:35.846 [2024-10-28 18:18:52.184349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.846 [2024-10-28 18:18:52.184932] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:28:35.846 [2024-10-28 18:18:52.205441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.846 [2024-10-28 18:18:52.205511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:28:35.847 [2024-10-28 18:18:52.205530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 20.506 ms 00:28:35.847 [2024-10-28 18:18:52.205543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.847 [2024-10-28 18:18:52.218100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:28:35.847 [2024-10-28 18:18:52.218190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:28:35.847 [2024-10-28 18:18:52.218216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.060 ms 00:28:35.847 [2024-10-28 18:18:52.218229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.847 [2024-10-28 18:18:52.218824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.847 [2024-10-28 18:18:52.218867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:35.847 [2024-10-28 18:18:52.218883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.437 ms 00:28:35.847 [2024-10-28 18:18:52.218895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.847 [2024-10-28 18:18:52.218979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.847 [2024-10-28 18:18:52.219002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:35.847 [2024-10-28 18:18:52.219015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.057 ms 00:28:35.847 [2024-10-28 18:18:52.219027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.847 [2024-10-28 18:18:52.219069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.847 [2024-10-28 18:18:52.219084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:28:35.847 [2024-10-28 18:18:52.219096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.012 ms 00:28:35.847 [2024-10-28 18:18:52.219108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.847 [2024-10-28 18:18:52.219146] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:28:35.847 [2024-10-28 18:18:52.223477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.847 [2024-10-28 18:18:52.223543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:35.847 [2024-10-28 18:18:52.223561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.340 ms 00:28:35.847 [2024-10-28 18:18:52.223572] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.847 [2024-10-28 18:18:52.223634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.847 [2024-10-28 18:18:52.223649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:28:35.847 [2024-10-28 18:18:52.223664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:35.847 [2024-10-28 18:18:52.223675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.847 [2024-10-28 18:18:52.223751] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:28:35.847 [2024-10-28 18:18:52.223786] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:28:35.847 [2024-10-28 18:18:52.223831] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:28:35.847 [2024-10-28 18:18:52.223903] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:28:35.847 [2024-10-28 18:18:52.224035] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:28:35.847 [2024-10-28 18:18:52.224058] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:28:35.847 [2024-10-28 18:18:52.224074] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:28:35.847 [2024-10-28 18:18:52.224090] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:28:35.847 [2024-10-28 18:18:52.224103] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:28:35.847 [2024-10-28 18:18:52.224116] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:28:35.847 [2024-10-28 18:18:52.224127] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:28:35.847 [2024-10-28 18:18:52.224138] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:28:35.847 [2024-10-28 18:18:52.224149] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:28:35.847 [2024-10-28 18:18:52.224162] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.847 [2024-10-28 18:18:52.224179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:28:35.847 [2024-10-28 18:18:52.224191] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.416 ms 00:28:35.847 [2024-10-28 18:18:52.224202] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.847 [2024-10-28 18:18:52.224308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.847 [2024-10-28 18:18:52.224329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:28:35.847 [2024-10-28 18:18:52.224342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:28:35.847 [2024-10-28 18:18:52.224353] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.847 [2024-10-28 18:18:52.224474] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:28:35.847 [2024-10-28 18:18:52.224490] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:28:35.847 [2024-10-28 18:18:52.224508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:35.847 [2024-10-28 18:18:52.224520] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224532] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:28:35.847 [2024-10-28 18:18:52.224544] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224556] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:28:35.847 [2024-10-28 18:18:52.224567] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:28:35.847 [2024-10-28 18:18:52.224578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:28:35.847 [2024-10-28 18:18:52.224588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224599] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:28:35.847 [2024-10-28 18:18:52.224610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:28:35.847 [2024-10-28 18:18:52.224620] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224631] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:28:35.847 [2024-10-28 18:18:52.224641] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:28:35.847 [2024-10-28 18:18:52.224652] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:28:35.847 [2024-10-28 18:18:52.224674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:28:35.847 [2024-10-28 18:18:52.224684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224695] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:28:35.847 [2024-10-28 18:18:52.224706] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:28:35.847 [2024-10-28 18:18:52.224717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.847 [2024-10-28 18:18:52.224727] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:28:35.847 [2024-10-28 18:18:52.224756] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:28:35.847 [2024-10-28 18:18:52.224767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.847 [2024-10-28 18:18:52.224778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:28:35.847 [2024-10-28 18:18:52.224789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:28:35.847 [2024-10-28 18:18:52.224799] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.847 [2024-10-28 18:18:52.224810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:28:35.847 [2024-10-28 18:18:52.224820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:28:35.847 [2024-10-28 18:18:52.224831] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:28:35.847 [2024-10-28 18:18:52.224859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:28:35.847 [2024-10-28 18:18:52.224871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:28:35.847 [2024-10-28 18:18:52.224882] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:28:35.847 [2024-10-28 18:18:52.224905] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:28:35.847 [2024-10-28 18:18:52.224916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:28:35.847 [2024-10-28 18:18:52.224938] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224949] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224959] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:28:35.847 [2024-10-28 18:18:52.224971] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:28:35.847 [2024-10-28 18:18:52.224982] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.224992] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:28:35.847 [2024-10-28 18:18:52.225004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:28:35.847 [2024-10-28 18:18:52.225015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:28:35.847 [2024-10-28 18:18:52.225026] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:28:35.847 [2024-10-28 18:18:52.225038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:28:35.847 [2024-10-28 18:18:52.225049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:28:35.847 [2024-10-28 18:18:52.225059] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:28:35.847 [2024-10-28 18:18:52.225070] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:28:35.847 [2024-10-28 18:18:52.225080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:28:35.847 [2024-10-28 18:18:52.225091] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:28:35.847 [2024-10-28 18:18:52.225105] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:28:35.847 [2024-10-28 18:18:52.225119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.847 [2024-10-28 18:18:52.225132] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:28:35.847 [2024-10-28 18:18:52.225144] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:28:35.847 [2024-10-28 18:18:52.225155] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:28:35.847 [2024-10-28 18:18:52.225166] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:28:35.848 [2024-10-28 18:18:52.225178] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:28:35.848 [2024-10-28 18:18:52.225189] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:28:35.848 [2024-10-28 18:18:52.225201] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:28:35.848 [2024-10-28 18:18:52.225212] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:28:35.848 [2024-10-28 18:18:52.225224] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:28:35.848 [2024-10-28 18:18:52.225236] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:28:35.848 [2024-10-28 18:18:52.225248] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:28:35.848 [2024-10-28 18:18:52.225259] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:28:35.848 [2024-10-28 18:18:52.225271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:28:35.848 [2024-10-28 18:18:52.225283] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:28:35.848 [2024-10-28 18:18:52.225295] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:28:35.848 [2024-10-28 18:18:52.225309] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:35.848 [2024-10-28 18:18:52.225322] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:28:35.848 [2024-10-28 18:18:52.225334] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:28:35.848 [2024-10-28 18:18:52.225345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:28:35.848 [2024-10-28 18:18:52.225357] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:28:35.848 [2024-10-28 18:18:52.225370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.848 [2024-10-28 18:18:52.225387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:28:35.848 [2024-10-28 18:18:52.225399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.967 ms 00:28:35.848 [2024-10-28 18:18:52.225410] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.848 [2024-10-28 18:18:52.257611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.848 [2024-10-28 18:18:52.257675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:35.848 [2024-10-28 18:18:52.257696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.125 ms 00:28:35.848 [2024-10-28 18:18:52.257709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.848 [2024-10-28 18:18:52.257781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.848 [2024-10-28 18:18:52.257798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:28:35.848 [2024-10-28 18:18:52.257810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.015 ms 00:28:35.848 [2024-10-28 18:18:52.257822] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.848 [2024-10-28 18:18:52.298772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.848 [2024-10-28 18:18:52.298879] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:35.848 [2024-10-28 18:18:52.298903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.820 ms 00:28:35.848 [2024-10-28 18:18:52.298915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.848 [2024-10-28 18:18:52.299016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.848 [2024-10-28 18:18:52.299034] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:35.848 [2024-10-28 18:18:52.299048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:28:35.848 [2024-10-28 18:18:52.299059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.848 [2024-10-28 18:18:52.299319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.848 [2024-10-28 18:18:52.299344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:35.848 [2024-10-28 18:18:52.299359] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.123 ms 00:28:35.848 [2024-10-28 18:18:52.299371] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:28:35.848 [2024-10-28 18:18:52.299435] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.848 [2024-10-28 18:18:52.299452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:35.848 [2024-10-28 18:18:52.299464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.030 ms 00:28:35.848 [2024-10-28 18:18:52.299476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.848 [2024-10-28 18:18:52.318028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.848 [2024-10-28 18:18:52.318091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:35.848 [2024-10-28 18:18:52.318124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 18.512 ms 00:28:35.848 [2024-10-28 18:18:52.318136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:35.848 [2024-10-28 18:18:52.318350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:35.848 [2024-10-28 18:18:52.318373] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:28:35.848 [2024-10-28 18:18:52.318388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:28:35.848 [2024-10-28 18:18:52.318400] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.107 [2024-10-28 18:18:52.359497] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.107 [2024-10-28 18:18:52.359626] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:28:36.107 [2024-10-28 18:18:52.359659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 41.056 ms 00:28:36.107 [2024-10-28 18:18:52.359678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.107 [2024-10-28 18:18:52.375591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.107 [2024-10-28 18:18:52.375668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:28:36.107 [2024-10-28 18:18:52.375699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.688 ms 00:28:36.107 [2024-10-28 18:18:52.375711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.107 [2024-10-28 18:18:52.450043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.107 [2024-10-28 18:18:52.450114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:28:36.107 [2024-10-28 18:18:52.450146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 74.211 ms 00:28:36.107 [2024-10-28 18:18:52.450160] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.107 [2024-10-28 18:18:52.450393] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:28:36.107 [2024-10-28 18:18:52.450559] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:28:36.107 [2024-10-28 18:18:52.450700] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:28:36.107 [2024-10-28 18:18:52.450864] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:28:36.107 [2024-10-28 18:18:52.450885] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.107 [2024-10-28 18:18:52.450898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:28:36.107 [2024-10-28 
18:18:52.450911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.637 ms 00:28:36.107 [2024-10-28 18:18:52.450922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.107 [2024-10-28 18:18:52.451056] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:28:36.107 [2024-10-28 18:18:52.451078] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.107 [2024-10-28 18:18:52.451095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:28:36.107 [2024-10-28 18:18:52.451108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.024 ms 00:28:36.107 [2024-10-28 18:18:52.451120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.107 [2024-10-28 18:18:52.470962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.107 [2024-10-28 18:18:52.471037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:28:36.107 [2024-10-28 18:18:52.471056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 19.791 ms 00:28:36.107 [2024-10-28 18:18:52.471068] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.107 [2024-10-28 18:18:52.483233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.107 [2024-10-28 18:18:52.483287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:28:36.107 [2024-10-28 18:18:52.483304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:28:36.107 [2024-10-28 18:18:52.483315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.107 [2024-10-28 18:18:52.483474] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:28:36.107 [2024-10-28 18:18:52.483616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.107 [2024-10-28 18:18:52.483643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:36.107 [2024-10-28 18:18:52.483656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.146 ms 00:28:36.107 [2024-10-28 18:18:52.483668] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.674 [2024-10-28 18:18:52.973780] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.674 [2024-10-28 18:18:52.973869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:36.674 [2024-10-28 18:18:52.973891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 488.906 ms 00:28:36.674 [2024-10-28 18:18:52.973906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.674 [2024-10-28 18:18:52.978614] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.674 [2024-10-28 18:18:52.978657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:36.674 [2024-10-28 18:18:52.978674] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.855 ms 00:28:36.674 [2024-10-28 18:18:52.978686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.674 [2024-10-28 18:18:52.979049] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:28:36.674 [2024-10-28 18:18:52.979085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.674 [2024-10-28 18:18:52.979099] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:36.674 [2024-10-28 18:18:52.979111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.350 ms 00:28:36.674 [2024-10-28 18:18:52.979123] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.674 [2024-10-28 18:18:52.979167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.674 [2024-10-28 18:18:52.979186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:36.674 [2024-10-28 18:18:52.979199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:36.674 [2024-10-28 18:18:52.979210] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:36.674 [2024-10-28 18:18:52.979266] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 495.800 ms, result 0 00:28:36.674 [2024-10-28 18:18:52.979324] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:28:36.674 [2024-10-28 18:18:52.979425] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:36.674 [2024-10-28 18:18:52.979439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:28:36.674 [2024-10-28 18:18:52.979451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.102 ms 00:28:36.674 [2024-10-28 18:18:52.979461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.473933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.474014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:28:37.257 [2024-10-28 18:18:53.474037] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 493.339 ms 00:28:37.257 [2024-10-28 18:18:53.474049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.478692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.478736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:28:37.257 [2024-10-28 18:18:53.478753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.722 ms 00:28:37.257 [2024-10-28 18:18:53.478764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.479176] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:28:37.257 [2024-10-28 18:18:53.479210] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.479223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:28:37.257 [2024-10-28 18:18:53.479236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.405 ms 00:28:37.257 [2024-10-28 18:18:53.479247] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.479329] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.479348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:28:37.257 [2024-10-28 18:18:53.479360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:37.257 [2024-10-28 18:18:53.479370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 
18:18:53.479455] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 500.127 ms, result 0 00:28:37.257 [2024-10-28 18:18:53.479512] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:37.257 [2024-10-28 18:18:53.479529] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:28:37.257 [2024-10-28 18:18:53.479542] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.479554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:28:37.257 [2024-10-28 18:18:53.479566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 996.105 ms 00:28:37.257 [2024-10-28 18:18:53.479577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.479618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.479633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:28:37.257 [2024-10-28 18:18:53.479652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:28:37.257 [2024-10-28 18:18:53.479663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.492337] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:28:37.257 [2024-10-28 18:18:53.492544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.492565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:28:37.257 [2024-10-28 18:18:53.492582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.856 ms 00:28:37.257 [2024-10-28 18:18:53.492593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.493403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.493433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:28:37.257 [2024-10-28 18:18:53.493453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.660 ms 00:28:37.257 [2024-10-28 18:18:53.493465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.495989] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.496017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:28:37.257 [2024-10-28 18:18:53.496030] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.491 ms 00:28:37.257 [2024-10-28 18:18:53.496041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.496122] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.496153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:28:37.257 [2024-10-28 18:18:53.496166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:28:37.257 [2024-10-28 18:18:53.496184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.496327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.496351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:28:37.257 
[2024-10-28 18:18:53.496364] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:37.257 [2024-10-28 18:18:53.496375] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.496405] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.257 [2024-10-28 18:18:53.496419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:28:37.257 [2024-10-28 18:18:53.496431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:28:37.257 [2024-10-28 18:18:53.496443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.257 [2024-10-28 18:18:53.496485] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:28:37.257 [2024-10-28 18:18:53.496505] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.258 [2024-10-28 18:18:53.496516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:28:37.258 [2024-10-28 18:18:53.496529] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.022 ms 00:28:37.258 [2024-10-28 18:18:53.496540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.258 [2024-10-28 18:18:53.496606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:37.258 [2024-10-28 18:18:53.496622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:28:37.258 [2024-10-28 18:18:53.496635] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.041 ms 00:28:37.258 [2024-10-28 18:18:53.496646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:37.258 [2024-10-28 18:18:53.497828] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1314.307 ms, result 0 00:28:37.258 [2024-10-28 18:18:53.513264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.258 [2024-10-28 18:18:53.529276] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:28:37.258 [2024-10-28 18:18:53.538349] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@866 -- # return 0 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:37.258 Validate MD5 checksum, iteration 1 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:37.258 18:18:53 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:37.258 18:18:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:28:37.516 [2024-10-28 18:18:53.768914] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:28:37.516 [2024-10-28 18:18:53.769058] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81593 ] 00:28:37.516 [2024-10-28 18:18:53.949164] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.773 [2024-10-28 18:18:54.081570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:39.674  [2024-10-28T18:18:57.089Z] Copying: 447/1024 [MB] (447 MBps) [2024-10-28T18:18:57.347Z] Copying: 877/1024 [MB] (430 MBps) [2024-10-28T18:18:59.876Z] Copying: 1024/1024 [MB] (average 437 MBps) 00:28:43.398 00:28:43.398 18:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:28:43.398 18:18:59 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:45.939 Validate MD5 checksum, iteration 2 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=e4fcd7eb3288eeb38af1afccb38f958d 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ e4fcd7eb3288eeb38af1afccb38f958d != \e\4\f\c\d\7\e\b\3\2\8\8\e\e\b\3\8\a\f\1\a\f\c\c\b\3\8\f\9\5\8\d ]] 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:28:45.939 18:19:02 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:28:45.939 
[2024-10-28 18:19:02.159701] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 00:28:45.939 [2024-10-28 18:19:02.160119] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81677 ] 00:28:45.939 [2024-10-28 18:19:02.346912] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.199 [2024-10-28 18:19:02.471276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:48.101  [2024-10-28T18:19:05.148Z] Copying: 459/1024 [MB] (459 MBps) [2024-10-28T18:19:05.716Z] Copying: 908/1024 [MB] (449 MBps) [2024-10-28T18:19:07.088Z] Copying: 1024/1024 [MB] (average 440 MBps) 00:28:50.610 00:28:50.610 18:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:28:50.610 18:19:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=c329ab633faabebef0ad3bbc9d9f6ab4 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ c329ab633faabebef0ad3bbc9d9f6ab4 != \c\3\2\9\a\b\6\3\3\f\a\a\b\e\b\e\f\0\a\d\3\b\b\c\9\d\9\f\6\a\b\4 ]] 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 81554 ]] 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 81554 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@952 -- # '[' -z 81554 ']' 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@956 -- # kill -0 81554 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # uname 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81554 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:53.138 killing process with pid 81554 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@970 -- # echo 'killing process with pid 81554' 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@971 -- # kill 81554 00:28:53.138 18:19:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@976 -- # wait 81554 00:28:54.073 [2024-10-28 18:19:10.391024] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:28:54.073 [2024-10-28 18:19:10.409322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.409392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:28:54.073 [2024-10-28 18:19:10.409412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:28:54.073 [2024-10-28 18:19:10.409426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.409458] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:28:54.073 [2024-10-28 18:19:10.412813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.413009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:28:54.073 [2024-10-28 18:19:10.413038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.331 ms 00:28:54.073 [2024-10-28 18:19:10.413060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.413332] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.413353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:28:54.073 [2024-10-28 18:19:10.413367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.232 ms 00:28:54.073 [2024-10-28 18:19:10.413378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.414620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.414663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:28:54.073 [2024-10-28 18:19:10.414679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.220 ms 00:28:54.073 [2024-10-28 18:19:10.414691] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.415972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.416016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:28:54.073 [2024-10-28 18:19:10.416032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.229 ms 00:28:54.073 [2024-10-28 18:19:10.416044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.428731] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.428800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:28:54.073 [2024-10-28 18:19:10.428821] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.602 ms 00:28:54.073 [2024-10-28 18:19:10.428867] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.435570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.435752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:28:54.073 [2024-10-28 18:19:10.435782] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.650 ms 00:28:54.073 [2024-10-28 18:19:10.435796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.435928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.435951] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:28:54.073 [2024-10-28 18:19:10.435965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.056 ms 00:28:54.073 [2024-10-28 18:19:10.435976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.448515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.448580] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:28:54.073 [2024-10-28 18:19:10.448599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.490 ms 00:28:54.073 [2024-10-28 18:19:10.448611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.461189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.461265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:28:54.073 [2024-10-28 18:19:10.461284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.526 ms 00:28:54.073 [2024-10-28 18:19:10.461296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.473791] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.473877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:28:54.073 [2024-10-28 18:19:10.473897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.438 ms 00:28:54.073 [2024-10-28 18:19:10.473909] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.486246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.486464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:28:54.073 [2024-10-28 18:19:10.486495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.250 ms 00:28:54.073 [2024-10-28 18:19:10.486507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.486565] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:28:54.073 [2024-10-28 18:19:10.486590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:54.073 [2024-10-28 18:19:10.486605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:28:54.073 [2024-10-28 18:19:10.486618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:28:54.073 [2024-10-28 18:19:10.486631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 
18:19:10.486680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486718] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486779] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:54.073 [2024-10-28 18:19:10.486818] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:28:54.073 [2024-10-28 18:19:10.486829] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: 32bdcf0b-f3ee-4351-9671-39840621531e 00:28:54.073 [2024-10-28 18:19:10.486866] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:28:54.073 [2024-10-28 18:19:10.486878] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:28:54.073 [2024-10-28 18:19:10.486889] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:28:54.073 [2024-10-28 18:19:10.486901] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:28:54.073 [2024-10-28 18:19:10.486912] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:28:54.073 [2024-10-28 18:19:10.486923] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:28:54.073 [2024-10-28 18:19:10.486934] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:28:54.073 [2024-10-28 18:19:10.486944] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:28:54.073 [2024-10-28 18:19:10.486954] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:28:54.073 [2024-10-28 18:19:10.486965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.486988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:28:54.073 [2024-10-28 18:19:10.487003] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.403 ms 00:28:54.073 [2024-10-28 18:19:10.487014] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.503719] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.503781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:28:54.073 [2024-10-28 18:19:10.503800] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] 
duration: 16.673 ms 00:28:54.073 [2024-10-28 18:19:10.503814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.073 [2024-10-28 18:19:10.504354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:28:54.073 [2024-10-28 18:19:10.504376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:28:54.073 [2024-10-28 18:19:10.504390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.428 ms 00:28:54.073 [2024-10-28 18:19:10.504401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.559692] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.559760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:28:54.331 [2024-10-28 18:19:10.559779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.559791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.559880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.559899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:28:54.331 [2024-10-28 18:19:10.559912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.559924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.560059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.560080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:28:54.331 [2024-10-28 18:19:10.560094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.560105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.560129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.560152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:28:54.331 [2024-10-28 18:19:10.560180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.560192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.664351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.664664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:28:54.331 [2024-10-28 18:19:10.664696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.664710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.751246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.751514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:28:54.331 [2024-10-28 18:19:10.751546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.751560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.751694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.751714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:28:54.331 [2024-10-28 18:19:10.751727] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.751738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.751799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.751816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:28:54.331 [2024-10-28 18:19:10.751869] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.751897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.752059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.752080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:28:54.331 [2024-10-28 18:19:10.752092] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.752104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.752166] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.752186] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:28:54.331 [2024-10-28 18:19:10.752199] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.752216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.752262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.752278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:28:54.331 [2024-10-28 18:19:10.752290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.752301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.752353] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:28:54.331 [2024-10-28 18:19:10.752370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:28:54.331 [2024-10-28 18:19:10.752388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:28:54.331 [2024-10-28 18:19:10.752399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:28:54.331 [2024-10-28 18:19:10.752554] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 343.197 ms, result 0 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:28:55.701 Remove shared memory files 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 
00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid81345 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:28:55.701 00:28:55.701 real 1m42.099s 00:28:55.701 user 2m26.571s 00:28:55.701 sys 0m25.255s 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:55.701 18:19:11 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:55.701 ************************************ 00:28:55.701 END TEST ftl_upgrade_shutdown 00:28:55.701 ************************************ 00:28:55.701 18:19:11 ftl -- ftl/ftl.sh@80 -- # [[ 0 -eq 1 ]] 00:28:55.701 18:19:11 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:28:55.701 18:19:11 ftl -- ftl/ftl.sh@14 -- # killprocess 73842 00:28:55.701 18:19:11 ftl -- common/autotest_common.sh@952 -- # '[' -z 73842 ']' 00:28:55.701 18:19:11 ftl -- common/autotest_common.sh@956 -- # kill -0 73842 00:28:55.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 956: kill: (73842) - No such process 00:28:55.702 18:19:11 ftl -- common/autotest_common.sh@979 -- # echo 'Process with pid 73842 is not found' 00:28:55.702 Process with pid 73842 is not found 00:28:55.702 18:19:11 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:28:55.702 18:19:11 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=81811 00:28:55.702 18:19:11 ftl -- ftl/ftl.sh@20 -- # waitforlisten 81811 00:28:55.702 18:19:11 ftl -- common/autotest_common.sh@833 -- # '[' -z 81811 ']' 00:28:55.702 18:19:11 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:55.702 18:19:11 ftl -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:55.702 18:19:11 ftl -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:55.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:55.702 18:19:11 ftl -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:55.702 18:19:11 ftl -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:55.702 18:19:11 ftl -- common/autotest_common.sh@10 -- # set +x 00:28:55.702 [2024-10-28 18:19:12.055177] Starting SPDK v25.01-pre git sha1 d490b5576 / DPDK 24.03.0 initialization... 
00:28:55.702 [2024-10-28 18:19:12.055337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81811 ] 00:28:55.959 [2024-10-28 18:19:12.235439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:55.959 [2024-10-28 18:19:12.363202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.892 18:19:13 ftl -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:56.892 18:19:13 ftl -- common/autotest_common.sh@866 -- # return 0 00:28:56.892 18:19:13 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:28:57.150 nvme0n1 00:28:57.150 18:19:13 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:28:57.150 18:19:13 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:57.150 18:19:13 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:28:57.715 18:19:13 ftl -- ftl/common.sh@28 -- # stores=1a8e1e43-4a63-4add-8e93-a4b4f0c2945f 00:28:57.715 18:19:13 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:28:57.715 18:19:13 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1a8e1e43-4a63-4add-8e93-a4b4f0c2945f 00:28:57.973 18:19:14 ftl -- ftl/ftl.sh@23 -- # killprocess 81811 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@952 -- # '[' -z 81811 ']' 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@956 -- # kill -0 81811 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@957 -- # uname 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 81811 00:28:57.973 killing process with pid 81811 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@970 -- # echo 'killing process with pid 81811' 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@971 -- # kill 81811 00:28:57.973 18:19:14 ftl -- common/autotest_common.sh@976 -- # wait 81811 00:29:00.502 18:19:16 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:29:00.502 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:00.502 Waiting for block devices as requested 00:29:00.502 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:29:00.502 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:29:00.502 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:29:00.502 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:29:05.768 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:29:05.768 Remove shared memory files 00:29:05.768 18:19:21 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:29:05.768 18:19:21 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:05.768 18:19:21 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:29:05.768 18:19:21 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:29:05.768 18:19:21 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:29:05.768 18:19:21 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:05.768 18:19:21 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:29:05.768 
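The killprocess and remove_shm helpers traced above follow a fixed pattern: check the PID is non-empty, probe it with kill -0, confirm via ps that it is not a sudo wrapper, then kill and wait; shared-memory artifacts under /dev/shm are removed afterwards. The following is an approximate reconstruction from the xtrace output, not the verbatim autotest_common.sh source:

    # Approximate reconstruction of the killprocess helper seen in the
    # xtrace above (autotest_common.sh@952-@976); exact SPDK source may differ.
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1
        kill -0 "$pid" 2>/dev/null || return 0          # nothing to do if already gone
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1      # refuse to kill a sudo wrapper
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true                 # wait only succeeds for children of this shell
    }

    # Shared-memory cleanup, mirroring ftl/common.sh@204-@209 above:
    echo 'Remove shared memory files'
    rm -f /dev/shm/spdk_tgt_trace.pid81345 /dev/shm/iscsi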
************************************ 00:29:05.768 END TEST ftl 00:29:05.768 ************************************ 00:29:05.768 00:29:05.768 real 12m4.225s 00:29:05.768 user 15m6.260s 00:29:05.768 sys 1m34.996s 00:29:05.768 18:19:21 ftl -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:05.768 18:19:21 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:05.768 18:19:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:05.768 18:19:22 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:05.768 18:19:22 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:29:05.768 18:19:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:05.768 18:19:22 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:29:05.768 18:19:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:05.768 18:19:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:05.768 18:19:22 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:29:05.768 18:19:22 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:29:05.768 18:19:22 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:29:05.768 18:19:22 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:05.768 18:19:22 -- common/autotest_common.sh@10 -- # set +x 00:29:05.768 18:19:22 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:29:05.768 18:19:22 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:29:05.768 18:19:22 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:29:05.768 18:19:22 -- common/autotest_common.sh@10 -- # set +x 00:29:07.666 INFO: APP EXITING 00:29:07.666 INFO: killing all VMs 00:29:07.666 INFO: killing vhost app 00:29:07.666 INFO: EXIT DONE 00:29:07.666 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:07.925 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:07.925 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:08.184 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:29:08.184 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:29:08.443 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:08.701 Cleaning 00:29:08.701 Removing: /var/run/dpdk/spdk0/config 00:29:08.701 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:08.701 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:08.701 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:08.701 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:08.701 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:08.701 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:08.701 Removing: /var/run/dpdk/spdk0 00:29:08.961 Removing: /var/run/dpdk/spdk_pid57751 00:29:08.961 Removing: /var/run/dpdk/spdk_pid57975 00:29:08.961 Removing: /var/run/dpdk/spdk_pid58204 00:29:08.961 Removing: /var/run/dpdk/spdk_pid58308 00:29:08.961 Removing: /var/run/dpdk/spdk_pid58353 00:29:08.961 Removing: /var/run/dpdk/spdk_pid58485 00:29:08.961 Removing: /var/run/dpdk/spdk_pid58510 00:29:08.961 Removing: /var/run/dpdk/spdk_pid58709 00:29:08.961 Removing: /var/run/dpdk/spdk_pid58814 00:29:08.961 Removing: /var/run/dpdk/spdk_pid58921 00:29:08.961 Removing: /var/run/dpdk/spdk_pid59042 00:29:08.961 Removing: /var/run/dpdk/spdk_pid59140 00:29:08.961 Removing: /var/run/dpdk/spdk_pid59185 00:29:08.961 Removing: /var/run/dpdk/spdk_pid59222 00:29:08.961 Removing: /var/run/dpdk/spdk_pid59292 00:29:08.961 Removing: /var/run/dpdk/spdk_pid59387 00:29:08.961 Removing: /var/run/dpdk/spdk_pid59864 00:29:08.961 Removing: /var/run/dpdk/spdk_pid59941 
00:29:08.961 Removing: /var/run/dpdk/spdk_pid60010 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60031 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60162 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60178 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60313 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60329 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60398 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60422 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60486 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60504 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60688 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60719 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60808 00:29:08.961 Removing: /var/run/dpdk/spdk_pid60997 00:29:08.961 Removing: /var/run/dpdk/spdk_pid61086 00:29:08.961 Removing: /var/run/dpdk/spdk_pid61134 00:29:08.961 Removing: /var/run/dpdk/spdk_pid61612 00:29:08.961 Removing: /var/run/dpdk/spdk_pid61710 00:29:08.961 Removing: /var/run/dpdk/spdk_pid61825 00:29:08.961 Removing: /var/run/dpdk/spdk_pid61878 00:29:08.961 Removing: /var/run/dpdk/spdk_pid61909 00:29:08.961 Removing: /var/run/dpdk/spdk_pid61993 00:29:08.961 Removing: /var/run/dpdk/spdk_pid62627 00:29:08.961 Removing: /var/run/dpdk/spdk_pid62669 00:29:08.961 Removing: /var/run/dpdk/spdk_pid63190 00:29:08.961 Removing: /var/run/dpdk/spdk_pid63294 00:29:08.961 Removing: /var/run/dpdk/spdk_pid63409 00:29:08.961 Removing: /var/run/dpdk/spdk_pid63462 00:29:08.961 Removing: /var/run/dpdk/spdk_pid63488 00:29:08.961 Removing: /var/run/dpdk/spdk_pid63518 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65400 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65548 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65552 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65570 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65609 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65613 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65625 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65670 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65674 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65686 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65735 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65740 00:29:08.961 Removing: /var/run/dpdk/spdk_pid65752 00:29:08.961 Removing: /var/run/dpdk/spdk_pid67131 00:29:08.961 Removing: /var/run/dpdk/spdk_pid67245 00:29:08.961 Removing: /var/run/dpdk/spdk_pid68666 00:29:08.961 Removing: /var/run/dpdk/spdk_pid70018 00:29:08.961 Removing: /var/run/dpdk/spdk_pid70133 00:29:08.961 Removing: /var/run/dpdk/spdk_pid70247 00:29:08.961 Removing: /var/run/dpdk/spdk_pid70352 00:29:08.961 Removing: /var/run/dpdk/spdk_pid70479 00:29:08.961 Removing: /var/run/dpdk/spdk_pid70559 00:29:08.961 Removing: /var/run/dpdk/spdk_pid70701 00:29:08.961 Removing: /var/run/dpdk/spdk_pid71069 00:29:08.961 Removing: /var/run/dpdk/spdk_pid71110 00:29:08.961 Removing: /var/run/dpdk/spdk_pid71583 00:29:08.961 Removing: /var/run/dpdk/spdk_pid71767 00:29:08.961 Removing: /var/run/dpdk/spdk_pid71867 00:29:08.961 Removing: /var/run/dpdk/spdk_pid71978 00:29:08.961 Removing: /var/run/dpdk/spdk_pid72027 00:29:08.961 Removing: /var/run/dpdk/spdk_pid72052 00:29:08.961 Removing: /var/run/dpdk/spdk_pid72344 00:29:08.961 Removing: /var/run/dpdk/spdk_pid72404 00:29:08.961 Removing: /var/run/dpdk/spdk_pid72478 00:29:08.961 Removing: /var/run/dpdk/spdk_pid72892 00:29:08.961 Removing: /var/run/dpdk/spdk_pid73033 00:29:08.961 Removing: /var/run/dpdk/spdk_pid73842 00:29:08.961 Removing: /var/run/dpdk/spdk_pid73991 00:29:08.961 Removing: /var/run/dpdk/spdk_pid74180 00:29:08.961 Removing: 
/var/run/dpdk/spdk_pid74289 00:29:08.961 Removing: /var/run/dpdk/spdk_pid74653 00:29:08.961 Removing: /var/run/dpdk/spdk_pid74933 00:29:08.961 Removing: /var/run/dpdk/spdk_pid75282 00:29:08.961 Removing: /var/run/dpdk/spdk_pid75481 00:29:09.220 Removing: /var/run/dpdk/spdk_pid75623 00:29:09.221 Removing: /var/run/dpdk/spdk_pid75687 00:29:09.221 Removing: /var/run/dpdk/spdk_pid75831 00:29:09.221 Removing: /var/run/dpdk/spdk_pid75866 00:29:09.221 Removing: /var/run/dpdk/spdk_pid75930 00:29:09.221 Removing: /var/run/dpdk/spdk_pid76139 00:29:09.221 Removing: /var/run/dpdk/spdk_pid76371 00:29:09.221 Removing: /var/run/dpdk/spdk_pid76807 00:29:09.221 Removing: /var/run/dpdk/spdk_pid77263 00:29:09.221 Removing: /var/run/dpdk/spdk_pid77706 00:29:09.221 Removing: /var/run/dpdk/spdk_pid78217 00:29:09.221 Removing: /var/run/dpdk/spdk_pid78359 00:29:09.221 Removing: /var/run/dpdk/spdk_pid78465 00:29:09.221 Removing: /var/run/dpdk/spdk_pid79121 00:29:09.221 Removing: /var/run/dpdk/spdk_pid79199 00:29:09.221 Removing: /var/run/dpdk/spdk_pid79647 00:29:09.221 Removing: /var/run/dpdk/spdk_pid80084 00:29:09.221 Removing: /var/run/dpdk/spdk_pid80691 00:29:09.221 Removing: /var/run/dpdk/spdk_pid80836 00:29:09.221 Removing: /var/run/dpdk/spdk_pid80890 00:29:09.221 Removing: /var/run/dpdk/spdk_pid80960 00:29:09.221 Removing: /var/run/dpdk/spdk_pid81022 00:29:09.221 Removing: /var/run/dpdk/spdk_pid81103 00:29:09.221 Removing: /var/run/dpdk/spdk_pid81345 00:29:09.221 Removing: /var/run/dpdk/spdk_pid81419 00:29:09.221 Removing: /var/run/dpdk/spdk_pid81491 00:29:09.221 Removing: /var/run/dpdk/spdk_pid81554 00:29:09.221 Removing: /var/run/dpdk/spdk_pid81593 00:29:09.221 Removing: /var/run/dpdk/spdk_pid81677 00:29:09.221 Removing: /var/run/dpdk/spdk_pid81811 00:29:09.221 Clean 00:29:09.221 18:19:25 -- common/autotest_common.sh@1451 -- # return 0 00:29:09.221 18:19:25 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:29:09.221 18:19:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:09.221 18:19:25 -- common/autotest_common.sh@10 -- # set +x 00:29:09.221 18:19:25 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:29:09.221 18:19:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:09.221 18:19:25 -- common/autotest_common.sh@10 -- # set +x 00:29:09.221 18:19:25 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:09.221 18:19:25 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:09.221 18:19:25 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:09.221 18:19:25 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:29:09.221 18:19:25 -- spdk/autotest.sh@394 -- # hostname 00:29:09.221 18:19:25 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:09.480 geninfo: WARNING: invalid characters removed from testname! 
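The capture step that just completed writes test-time counters to cov_test.info; the autotest.sh@395-@403 steps that follow merge it with the baseline and strip uninteresting paths. As a hedged, standalone illustration of the same lcov workflow (paths, test name, and flags are taken from the log; the $OUT shorthand is introduced here for brevity):

    OUT=/home/vagrant/spdk_repo/spdk/../output   # output dir used throughout the log

    # Capture counters gathered while the tests ran (autotest.sh@394); the
    # extra --rc genhtml_*/geninfo_* switches from the log are omitted here.
    lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q -c \
         --no-external -d /home/vagrant/spdk_repo/spdk \
         -t fedora39-cloud-1721788873-2326 -o "$OUT/cov_test.info"

    # Merge baseline + test captures, then strip DPDK and system sources
    # (autotest.sh@395-@400).
    lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"
    lcov -q -r "$OUT/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$OUT/cov_total.info"
    # Further -r passes in the log drop '*/examples/vmd/*', '*/app/spdk_lspci/*',
    # and '*/app/spdk_top/*' (autotest.sh@401-@403).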
00:29:41.546 18:19:53 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:41.546 18:19:57 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:44.827 18:20:00 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:47.357 18:20:03 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:50.637 18:20:06 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:53.210 18:20:09 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:56.493 18:20:12 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:56.493 18:20:12 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:56.493 18:20:12 -- common/autotest_common.sh@736 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:56.493 18:20:12 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:56.493 18:20:12 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:56.493 18:20:12 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:56.493 + [[ -n 5295 ]] 00:29:56.493 + sudo kill 5295 00:29:56.501 [Pipeline] } 00:29:56.517 [Pipeline] // timeout 00:29:56.523 [Pipeline] } 00:29:56.539 [Pipeline] // stage 00:29:56.545 [Pipeline] } 00:29:56.562 [Pipeline] // catchError 00:29:56.572 [Pipeline] stage 00:29:56.574 [Pipeline] { (Stop VM) 00:29:56.586 [Pipeline] sh 00:29:56.864 + vagrant halt 00:30:01.066 ==> default: Halting domain... 
00:30:06.339 [Pipeline] sh 00:30:06.618 + vagrant destroy -f 00:30:10.849 ==> default: Removing domain... 00:30:11.119 [Pipeline] sh 00:30:11.396 + mv output /var/jenkins/workspace/nvme-vg-autotest_2/output 00:30:11.404 [Pipeline] } 00:30:11.419 [Pipeline] // stage 00:30:11.424 [Pipeline] } 00:30:11.441 [Pipeline] // dir 00:30:11.446 [Pipeline] } 00:30:11.461 [Pipeline] // wrap 00:30:11.465 [Pipeline] } 00:30:11.476 [Pipeline] // catchError 00:30:11.484 [Pipeline] stage 00:30:11.485 [Pipeline] { (Epilogue) 00:30:11.497 [Pipeline] sh 00:30:11.774 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:19.980 [Pipeline] catchError 00:30:19.983 [Pipeline] { 00:30:19.996 [Pipeline] sh 00:30:20.272 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:20.530 Artifacts sizes are good 00:30:20.539 [Pipeline] } 00:30:20.553 [Pipeline] // catchError 00:30:20.564 [Pipeline] archiveArtifacts 00:30:20.571 Archiving artifacts 00:30:20.678 [Pipeline] cleanWs 00:30:20.685 [WS-CLEANUP] Deleting project workspace... 00:30:20.685 [WS-CLEANUP] Deferred wipeout is used... 00:30:20.689 [WS-CLEANUP] done 00:30:20.692 [Pipeline] } 00:30:20.704 [Pipeline] // stage 00:30:20.708 [Pipeline] } 00:30:20.717 [Pipeline] // node 00:30:20.720 [Pipeline] End of Pipeline 00:30:20.743 Finished: SUCCESS
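For reference, the Stop VM and Epilogue stages above reduce to a short shell sequence; this is a minimal sketch of the commands the pipeline ran, assuming the jbp checkout and workspace path shown in the log:

    # Stop and remove the test VM (Stop VM stage).
    vagrant halt
    vagrant destroy -f

    # Move results into the Jenkins workspace, then compress and size-check
    # artifacts before archiving (Epilogue stage).
    mv output /var/jenkins/workspace/nvme-vg-autotest_2/output
    jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
    jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh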